You may or may not know what containers are, but chances are you’ve heard of a virtual machine. A Virtual Machine (VM) is a “soft computer” hosted by a real computer that provides a means of testing or isolation without having to reformat and completely reinstall a computer’s Operating System (OS) every time you want to start a new test on a clean machine. VMs are also great for protecting a computer from viruses and hackers: though a VM performs and behaves like a real computer, it has no direct access to the host’s hardware and can be erased or otherwise obliterated with little impact on the host computer and its OS.
VMs are great, but like a real computer, they contain heavy software packages that must be unpacked. Sometimes you need a VM for isolation, or perhaps you need to recreate an exact runtime scenario so that someone halfway across the world has your exact setup to help you troubleshoot, program, test, or tweak. A VM saves a huge amount of time by automating the re-creation of a machine’s state for others or for posterity. Here, “posterity” might be a test or development setup for every operating system your product is still backward-compatible with; every now and then you have to resurrect the VM image of an older OS and fix the drivers. VMs are great for that. But what if you don’t really need an entire VM? There’s a solution for that, and it’s called a container.
Containers are not VMs. They are small microcosms that include just enough of the environment (libraries, configuration, and environment variables) to run a specific application. Containers are like VMs in that they save time and provide isolation, but they do not create a virtual machine with all the trappings of a complete operating system. Containers are lightweight software packages. Linux developers began using containers to run individual, isolated Linux environments on a single host. A VM creates its own virtual operating system, whereas containers share the host’s operating system kernel while carrying their own libraries, shared or otherwise. Containers also isolate application processes to keep them from interfering with each other.
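The kernel-sharing distinction is easy to see in practice. A minimal sketch, assuming Docker is installed on a Linux host and the public `alpine` image can be pulled: a container reports the same kernel release as its host, because only its userspace (libraries and binaries) is its own.

```shell
# On the Linux host: print the running kernel release
uname -r

# Inside a throwaway Alpine container: the same kernel release,
# because the container shares the host's kernel
docker run --rm alpine uname -r

# The container's userspace, however, is entirely its own
docker run --rm alpine cat /etc/os-release
```

A VM, by contrast, boots its own kernel, so the two `uname -r` calls could report different versions.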
Docker is a container platform built around an open source community in which container images are made and reused. A standard container image is created by a group of developers who are experts in their product’s environment; others using that environment can download the image and use it as a framework for establishing all the particulars surrounding their own software. The image gets “packed” by that subsequent user. “Packing” is simply the act of setting all the variables to be where they need to be for the end user, rather than sending a huge document of instructions on which settings to set. The container image is then sent, with specific software contents, to another person, who only has to deal with the image itself, not all the tidbits of settings information, in order to have the software work as intended. Software is getting very complicated, and containers help us deal with that complexity immensely, with accuracy and expedience. If you receive a container image and run it, the environment variables are all set for you, whatever your particular OS.
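That “packing” can be sketched as a Dockerfile, the recipe from which a Docker image is built. This is a hypothetical example (the base image, packages, paths, and variable names are illustrative, not from any real product): each instruction bakes one piece of setup into the image so the recipient never has to configure it by hand.

```dockerfile
# Start from a known-good base environment
FROM ubuntu:18.04

# Install the toolchain the application needs (hypothetical package list)
RUN apt-get update && apt-get install -y gcc make

# Bake in the environment variables the software expects, instead of
# documenting them for the end user to set manually
ENV APP_HOME=/opt/myapp \
    BUILD_CONFIG=release

# Copy the software itself into the image and set the working directory
COPY . ${APP_HOME}
WORKDIR ${APP_HOME}
```

Anyone who runs the resulting image gets exactly this environment, which is what makes a container image a self-describing substitute for a long setup document.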
In the embedded world, this is a boon to setting up complex Software Development Kits (SDKs). Embedded developers must work from a host machine in most cases, and that host may be running any one of several OSes, such as Windows 10, Windows 8, Linux, or some version of OS X for Macintosh. Instead of requiring a developer to find a Linux machine to work on, the makers of an SDK can create Docker container images that run on all of those potential host OSes.
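From the developer’s side, using such an SDK container might look like the following sketch. The image name `vendor/embedded-sdk` and the `make all` build command are hypothetical; the workflow itself (pull the image once, then run the toolchain inside the container with the project directory mounted) is the typical pattern, and it is the same on a Windows, Linux, or Mac host.

```shell
# Fetch the vendor's prebuilt SDK environment (image name is hypothetical)
docker pull vendor/embedded-sdk:latest

# Cross-compile the local project inside the container, mounting the
# current directory so build artifacts land back on the host
docker run --rm -v "$(pwd)":/work -w /work vendor/embedded-sdk:latest \
    make all
```

The host never needs the cross-compiler, libraries, or environment variables installed natively; they all live inside the image.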
The lines between embedded applications, software appliances, servers, and what-have-you are beginning to blur. People now refer to some types of embedded development as “deeply embedded.” The 8-bit MCUs that once flooded the embedded playing field are being elbowed out by very affordable 32-bit SoCs, some of which come with (nearly) turnkey reference designs. When you hear someone talking about an embedded application, it could be a 300MB image or a 300kB image, both of which include operating systems. The Femto OS runs in 1KB to 4KB of flash, with “somewhere around 2K[B] for the OS itself….”
Containers help developers work efficiently and effectively with each other and can keep applications from interfering with one another. The juggernaut open source tool that is Docker holds much promise to make a fundamental shift in how we work together, much as Linux has done. Both enable regular people to innovate without having to reinvent the wheel. However, whereas Linux had to create its own tools as it went along (e.g., open source licensing, the community concept, and git), Docker is able to build on Linux’s tools and processes.
Lynnette Reese holds a B.S.E.E. from Louisiana State University in Baton Rouge. Lynnette has worked at Mouser Electronics, Texas Instruments, Freescale (now NXP), and Cypress Semiconductor. Lynnette has three kids and occasionally runs benign experiments on them. She is currently saving for the kids’ college and eventual therapy once they find out that cauliflower isn’t a rare albino broccoli (and other white lies).
Copyright ©2019 Mouser Electronics, Inc. - A TTI and Berkshire Hathaway company.