TensorFlow Lite for MCUs is AI on the Edge

Image: waves of light entering the organic left side of a brain, with the right side represented in binary and circuits. (Source: Laurent T/)

By Michael Parks for Mouser Electronics

Published May 5, 2020

The history of technological progress is full of examples of technologies evolving independently before converging to change the world. Atomic energy and jet engines converged to give rise to the nuclear aircraft carriers that redefined warfare for much of the 20th century. Computer and radio frequency communications converged to give us the smartphone, and in doing so, redefined how we all interacted with technology and each other. Today, the convergence of embedded electronics and artificial intelligence (AI) is increasingly poised to be one of the next game-changing technical unions. Let's take a look at the evolution of this convergence.

Welcome to the Edge

The notion of AI can be found in writings dating as far back as the ancient Greeks, although the first concerted efforts to develop AI as an actual technology did not emerge until the first half of the 20th century. Fundamentally, AI enables digital technology to interact with the analog world efficiently and responsively, much as the human brain does. For practical, real-world applications of AI to have utility (think autonomous vehicles), the interaction between the electronics and the physical world must be nearly instantaneous, even while processing multiple dynamic inputs. Thankfully, embedded electronics systems have continued to evolve alongside the development of machine-learning algorithms, and their marriage is giving rise to the concept of edge computing.

Edge computing takes the processing power that was historically achievable only with powerful hardware in the cloud and brings it to local devices that sit at the edge of the physical-digital interface. Combine that with the ubiquity of inexpensive yet robust embedded components, such as microcontrollers and sensors, and the result is a revolution in automation, in both scale and capability.

TensorFlow Lite: Big ML Algorithms on Tiny Hardware

TensorFlow, a Google-led effort, is a set of open-source software libraries that enable developers to easily integrate complex numerical computation algorithms and machine learning (ML) into their projects (Figure 1). According to Google, these libraries provide stable application programming interfaces (APIs) for Python (Python 3.7+ across all platforms) and C. APIs without backward-compatibility guarantees are also available for C++, Go, Java, and JavaScript. Additionally, an alpha release is available for Apple's Swift language.

Figure 1: Google's TensorFlow Lite for Microcontroller website. (Source: Google)

TensorFlow offers so-called end-to-end machine-learning support for the development and use of deep neural networks (DNNs). DNNs are an implementation of ML that is particularly adept at pattern recognition and at object detection and classification. The TensorFlow libraries support both phases of the machine-learning process: training and inferencing. The first phase, training deep neural networks, requires significant computing horsepower, typically found in server-grade hardware and graphics processing units (GPUs). More recently, application-specific integrated circuits known as Tensor Processing Units (TPUs) have been developed to support training efforts. The second phase, inferencing, uses the trained DNNs in the real world to respond to new inputs and make recommendations based on analysis of those inputs against the trained models. This is the phase that should be of keen interest to embedded product developers.

The release of TensorFlow Lite for Microcontrollers (a subset of the TensorFlow libraries) is specifically geared toward performing inferencing on the memory-constrained devices typical of embedded systems applications. It does not allow you to train new networks; that still requires higher-end hardware.
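To illustrate that division of labor, here is a minimal sketch, assuming TensorFlow 2.x is installed on a development workstation. The toy network below is illustrative only (not one of Google's example models): the full TensorFlow library handles training, and the TensorFlow Lite converter produces the compact FlatBuffer that a microcontroller would run.

```python
# Sketch: train a tiny model on a workstation, then convert it for
# TensorFlow Lite. The model shape and training data are placeholders.
import numpy as np
import tensorflow as tf

# A toy network -- real models come out of the training phase on GPU/TPU hardware.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(32, 4), np.random.rand(32, 1), epochs=1, verbose=0)

# Convert to a TensorFlow Lite FlatBuffer. Enabling the default
# optimizations produces a quantized model that is smaller and more
# efficient to execute on-device.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()  # raw bytes, ready to store in flash
```

Only the resulting bytes ever reach the microcontroller; the training machinery stays on the workstation.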

Practically Speaking: ML Application Use Cases

Terms such as artificial intelligence, neural networks, and machine learning can come across as either science fiction or jargon. So, what are the practical implications of these emerging technologies?

The goal of AI-based algorithms running on embedded systems is to process real-world data collected by sensors more efficiently than traditional procedural or object-oriented programming methodologies allow. Perhaps the most visible use case in our collective consciousness is the progression from legacy automobiles, to cars with automation assistance, such as lane-departure warnings and collision-avoidance systems, to the ultimate goal of self-driving cars with no human in the control loop. However, many less conspicuous uses of deep learning are already all around us, whether you know it or not. Voice recognition in your smartphone and virtual assistants such as Amazon Alexa leverage deep-learning algorithms. Other uses include facial detection for security applications and background replacement, sans green screen, in remote-meeting software such as Zoom.

One massive advantage of devices that combine machine-learning algorithms with internet connectivity, such as IoT devices, is that products can integrate new or better-trained models over time with a simple over-the-air firmware update. This means products can get smarter over time and are not limited to the functionality that was possible at the time of manufacture, so long as the new models and firmware still fit within the physical memory and processing capacity of the hardware.

Figure 2: Translating a TensorFlow model to a version that can be used aboard a memory-constrained device such as a microcontroller. (Source: NXP)

The Workflow

According to the documentation for TensorFlow Lite for Microcontrollers, the developer workflow can be broken down into five key steps (Figure 2). These steps are:

  1. Create or Obtain a TensorFlow Model: The model must be small enough to fit on your target device after conversion, and it can only use supported operations. If you need operations that are not currently supported, you can provide your own custom implementations.
  2. Convert the Model to a TensorFlow Lite FlatBuffer: You will convert your model into the standard TensorFlow Lite format using the TensorFlow Lite converter. You might wish to output a quantized model since these are smaller in size and more efficient to execute.
  3. Convert the FlatBuffer to a C byte array: Models are kept in read-only program memory and provided in the form of a simple C file. Standard tools can be used to convert the FlatBuffer into a C array.
  4. Integrate the TensorFlow Lite for Microcontrollers C++ Library: Write your microcontroller code to collect data, perform inference using the C++ library, and make use of the results.
  5. Deploy to your Device: Build and deploy the program to your device.
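Step 3 is typically handled with a standard tool such as `xxd -i`, but the transformation is simple enough to sketch in plain Python. The function name, array name, and the placeholder bytes below are hypothetical, chosen for illustration:

```python
# Sketch: turn a TensorFlow Lite FlatBuffer (the raw bytes of a .tflite
# file) into a C source snippet that stores the model in read-only
# program memory. Equivalent in spirit to `xxd -i model.tflite`.

def flatbuffer_to_c_array(model_bytes: bytes, name: str = "g_model") -> str:
    lines = [f"const unsigned char {name}[] = {{"]
    # Emit 12 bytes per line as hex literals.
    for i in range(0, len(model_bytes), 12):
        chunk = model_bytes[i:i + 12]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    lines.append("};")
    lines.append(f"const unsigned int {name}_len = {len(model_bytes)};")
    return "\n".join(lines)

# Example with placeholder bytes standing in for a real converted model:
c_source = flatbuffer_to_c_array(
    bytes([0x1c, 0x00, 0x00, 0x00, 0x54, 0x46, 0x4c, 0x33]))
```

The resulting C file is then compiled into the firmware alongside the microcontroller code from step 4, so the model lives in flash rather than RAM.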

Some caveats a developer should be aware of when selecting an embedded platform compatible with the TensorFlow Lite for Microcontrollers library:

  1. It targets 32-bit architectures such as Arm Cortex-M processors and ESP32-based systems.
  2. It can run on systems where memory is measured in the tens of kilobytes.
  3. It is written in C++ 11.
  4. It is available as an Arduino library, and the framework can also generate projects for other development environments such as Mbed.
  5. It requires no operating system support, no dynamic memory allocation, and none of the C/C++ standard libraries.

Next Steps

Google offers four pre-trained models as examples that can run on embedded platforms. With a few slight modifications, they can be used on various development boards. The examples include:

  • Hello World: Demonstrates the absolute basics of using TensorFlow Lite for Microcontrollers
  • Micro-Speech: Captures audio with a microphone to detect the words "yes" and "no".
  • Person Detection: Captures camera data with an image sensor to detect the presence or absence of a person.
  • Magic Wand: Captures accelerometer data to classify three different physical gestures.

Over the next few months, you can expect a series of step-by-step guides that will show you how to get these models working on various microcontroller platforms, including the development boards shown in Figure 3.

Figure 3: The development boards that will be used in this series of projects. Clockwise from top-left: NXP i.MX RT1060, Infineon XMC 4700 Relax, SiLabs SLSTK3701A EFM32 GG11 Starter Kit, Microchip SAM E54 Xplained Pro. (Source: Mouser)

Author Bio

Michael Parks, P.E. is a contributing writer for Mouser Electronics and the owner of Green Shoe Garage, a custom electronics design studio and technology consultancy located in Southern Maryland. He produces the S.T.E.A.M. Power Podcast to help raise public awareness of technical and scientific matters. Michael is also a licensed Professional Engineer in the state of Maryland and holds a Master's degree in systems engineering from Johns Hopkins University.