Bench Talk for Design Engineers | The Official Blog of Mouser Electronics

Closer to the Network Edge

Mark Patrick

(Source: Yurchanka Siarhei/Shutterstock.com)

Introduction to Machine Learning 

Machine learning (ML) lies at the heart of most artificial intelligence (AI) applications and involves teaching a computer to identify data patterns. More specifically, the goal is to create a trained model. This can be done with supervised learning, where the computer is shown examples from which to learn. Alternatively, the process can be unsupervised—the computer simply looks for interesting patterns in the data. Techniques involving continuous or ongoing learning, where the computer learns from its mistakes, also exist but are outside this article’s scope.
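As a minimal sketch of the supervised case, the following pure-Python example "trains" a nearest-centroid classifier on a handful of invented, labelled points, then uses the resulting model to label new data. Real systems use far richer models, but the two phases described above, learning from examples and then predicting, are the same.

```python
# Toy supervised learning: a nearest-centroid classifier.
# The data points and labels are invented for illustration.

def train(examples):
    """Compute one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(model, point):
    """Assign the label of the nearest centroid."""
    px, py = point
    return min(model, key=lambda lbl: (model[lbl][0] - px) ** 2
                                      + (model[lbl][1] - py) ** 2)

# Training phase: the computer is shown labelled examples.
examples = [((0.0, 0.0), "low"), ((1.0, 1.0), "low"),
            ((8.0, 8.0), "high"), ((9.0, 9.0), "high")]
model = train(examples)

# Inference phase: the trained model labels new, unseen data.
print(predict(model, (0.5, 0.2)))
```

The same data could be fed to an unsupervised algorithm, such as k-means clustering, without the labels; the computer would then discover the two groups on its own.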

Running Your ML Model

Once the ML model has been created, it can be applied to the job at hand. Models can be used for forecasting future events, detecting anomalies, and recognizing images or speech. In nearly all cases, models rely on large, deep tree structures and need significant computing power to run. This is especially true for models engaged in image and voice recognition, which are usually based on artificial neural networks. Neural networks create dense meshes of connections and need to run on highly parallelized hardware, often based on GPUs. Until recently, such power has only been available from cloud-based service providers, such as Amazon Web Services (AWS) or Azure.
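To make the scale of the problem concrete, here is a toy fully-connected layer in plain Python; the weights and sizes are invented for illustration. A real recognition network stacks many such layers with thousands of units each, and every output needs a multiply-accumulate over every input, which is exactly the kind of work a GPU parallelizes well.

```python
# One fully-connected neural-network layer, written out longhand.
# Weights and inputs here are hypothetical illustration values.

def dense(inputs, weights, biases):
    """output[j] = relu(sum_i inputs[i] * weights[i][j] + biases[j])"""
    outputs = []
    for j in range(len(biases)):
        total = biases[j] + sum(inputs[i] * weights[i][j]
                                for i in range(len(inputs)))
        outputs.append(max(0.0, total))  # ReLU activation
    return outputs

x = [1.0, 2.0]                    # 2 inputs
w = [[0.5, -1.0], [0.25, 0.75]]   # 2 inputs -> 2 units
b = [0.0, 0.1]
print(dense(x, w, b))
```

Each output unit touches every input, so a layer with n inputs and m units costs roughly n times m multiply-adds; stacking dozens of wide layers is what drives the hardware requirements discussed below.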

To get some idea of the power required, Table 1 shows the specifications of AWS P3 instances, a processing platform optimized for ML applications.

Table 1: Specification of AWS P3 Instances






(Table data not preserved in this copy; the surviving column headings are Storage BW and Network BW.)

These P3 instances are significant machines. They have huge amounts of RAM along with extremely fast network and storage access. Above all, they have significant CPU and GPU processing power, a requirement that makes running ML models at the network edge a real challenge.

The Drawbacks of Centralized AI

To date, most well-known AI applications have relied on the cloud because it is so hard to run ML models at the edge. However, this dependence on cloud computing imposes some limitations on using AI. Here is a list of some of the operational drawbacks to centralized AI: 

Some Applications Can’t Run in the Cloud

To operate AI in the cloud, a reliable network connection of adequate capacity is required. If this is not available, perhaps because of a lack of infrastructure, some AI applications have to run locally. In other words, these applications only work if you can run your ML models at the edge. 

Consider the example of the self-driving vehicle: it needs to perform several tasks that rely on machine learning, the most important of which is object detection and avoidance. This requires quite demanding ML models and a significant degree of computing power. However, even networked cars have only low-bandwidth, inconsistent connections (although 5G might improve this).

This limitation also applies when creating smart Internet of Things (IoT) monitoring systems for mining and other heavy industries. A fast network is often available locally, but internet connectivity might be reliant on a satellite uplink.
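A rough back-of-envelope calculation shows why streaming raw sensor data to the cloud is impractical for a vehicle. All figures below are illustrative assumptions, not measurements from the article.

```python
# Could a car stream one uncompressed camera feed to the cloud?
# All numbers are assumed, round figures for illustration.

width, height = 1920, 1080   # a single 1080p camera
fps = 30                     # frames per second
bytes_per_pixel = 3          # uncompressed RGB

raw_bits_per_second = width * height * bytes_per_pixel * 8 * fps
raw_mbps = raw_bits_per_second / 1e6

uplink_mbps = 10             # optimistic assumed cellular uplink

print(f"Raw camera feed: {raw_mbps:.0f} Mbit/s vs uplink: {uplink_mbps} Mbit/s")
```

Even with heavy video compression, a single camera can strain a mobile uplink, and autonomous vehicles carry several cameras plus radar and lidar, so the detection models must run in the vehicle itself.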

Latency Matters

Many ML applications need to work in real-time. As previously mentioned, self-driving cars are such an application, but there are also applications such as real-time facial recognition. This can be used for door-entry systems or security purposes; for example, police forces often use this technology to monitor crowds to potentially identify known trouble-makers at sporting and other events. 

AI is also increasingly being used to create smart medical devices. Some of these devices need to work in real-time to deliver real benefits, but the average round trip time to connect to a data center is typically 10ms-100ms. Real-time applications are, therefore, hard to achieve without moving ML models nearer to the network edge.
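The arithmetic behind that claim is simple. Assuming the system must keep up with a 30-frames-per-second input stream, and taking the 10ms-100ms round-trip figure above (the 5ms on-device inference time is an assumed value):

```python
# Latency budget for a real-time, 30 fps application.
# Round-trip range is from the text; local inference time is assumed.

frame_budget_ms = 1000 / 30        # ~33 ms available per frame

cloud_round_trip_ms = (10, 100)    # typical data-center round trip
local_inference_ms = 5             # assumed on-device model latency

best_cloud_ms = cloud_round_trip_ms[0] + local_inference_ms
worst_cloud_ms = cloud_round_trip_ms[1] + local_inference_ms

print(f"Budget: {frame_budget_ms:.0f} ms/frame; "
      f"cloud: {best_cloud_ms}-{worst_cloud_ms} ms; "
      f"local: {local_inference_ms} ms")
```

In the worst case, the network round trip alone uses three times the per-frame budget before any computation happens, whereas a local model leaves most of the budget free.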

Security Might Be an Issue

Several ML applications deal with secure or sensitive data. It is clearly possible to send this data across the network and securely store it in the cloud. However, local policies often forbid that practice. Health data is especially sensitive, and many countries have strict rules about sending it to a cloud server. Overall, it is always easier to secure a device that is only connected to a local network.


Cloud Compute Is Expensive

Subscriptions to ML-optimised cloud instances can be expensive: the lowest-spec instance shown in Table 1 costs around $3 (USD) an hour. Many cloud providers charge additional fees, such as for storage and network access, that also need to be considered. Realistically, running an AI application could cost up to $3,000 (USD) a month.
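Using the roughly $3-an-hour figure above, a quick estimate of the monthly bill for one always-on instance looks like this (the ancillary-fee figure is an assumption):

```python
# Rough monthly cost of one always-on ML-optimised cloud instance.
# The hourly rate is the ~$3 figure cited above; fees are assumed.

hourly_rate_usd = 3.0
hours_per_month = 24 * 30        # always-on, 30-day month

compute_cost_usd = hourly_rate_usd * hours_per_month
ancillary_usd = 600              # assumed storage + network fees
total_usd = compute_cost_usd + ancillary_usd

print(f"Compute: ${compute_cost_usd:,.0f}/month, "
      f"total: ~${total_usd:,.0f}/month")
```

Compute time alone comes to over $2,000 a month before storage and data-transfer fees, which is why a one-off investment in edge hardware can pay for itself quickly.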


Conclusion

Implementing successful machine learning has typically required cloud or server-based resources with significant compute power. However, as applications evolve and new use cases emerge, moving machine learning to the network edge becomes more compelling, especially when latency, security, and implementation cost are primary considerations.


Part of Mouser's EMEA team in Europe, Mark joined Mouser Electronics in July 2014 having previously held senior marketing roles at RS Components. Prior to RS, Mark spent 8 years at Texas Instruments in Applications Support and Technical Sales roles and holds a first class Honours Degree in Electronic Engineering from Coventry University.
