Originally posted in Texas Instruments' E2E Community Blog.
Designing power supplies for factory-automation equipment such as programmable logic controllers, transmitters, automation machinery and human machine interfaces comes with many challenges. Even as processing power continues to increase, printed circuit board (PCB) area and overall equipment sizes tend to remain the same. To meet these strict space constraints, power-supply designs must be compact yet operate efficiently and quietly; excess heat and noise are not acceptable. In addition, industrial power supplies must meet multiple requirements, including a wide input-voltage range, a small solution size and the ability to operate over a wide temperature range. Power-supply designers must keep component counts and costs down while providing a reliable solution that doesn’t require a lot of debugging. Starting with an integrated, robust device is therefore a high priority.
Figure 1: On the left is a 60V, 2A discrete solution, with all passives and the inductor chosen by the designer. On the right is an equivalent design using a power module; most of the passives and the inductor are included in the module.
At the highest level of power-supply integration, a power module will at the very least include the passives, field-effect transistors and an inductor in the package (Figure 1). The benefit of a power module is that it eliminates the task of selecting and evaluating the inductor; it also simplifies PCB layout, and you won’t have to design an external compensation network. Wide-input-voltage devices such as the SIMPLE SWITCHER® LMZ36002 are an ideal solution for these power-supply design challenges. The LMZ36002 is an integrated power module that combines a 60V, 2A DC/DC converter with a shielded inductor and passives in a low-profile quad flat no-lead package. Ideal for designers seeking higher power density, it has the smallest solution size in its class, requiring as few as three external components along with a small 10mm-by-10mm package.
Designers can create a custom LMZ36002 design using WEBENCH® Power Designer (see this sample design for VIN = 10V-60V and VOUT = 5V at 2A). To design for other specifications, enter the inputs in the “Change Inputs” section, then click “Submit.” This will create a customized design. You can further analyze the design by performing electrical and thermal simulations and even export the schematics and customized layouts to popular CAD formats once you are ready to prototype your design.
For power designers who want to take their design to the next level, the TI Designs Space Optimized Wide VIN Triple-Output Power Module Reference Design is powered from a 24V rail and provides three point-of-load output voltages: 3.3V, 1.8V and 1.2V (Figure 2). The reference design features the LMZ36002 wide VIN power module and two LMZ20502 nano modules. The layout is optimized for space-constrained systems, and the total solution size is about 400mm².
Figure 2: On the left is the TIDA-00783 space-optimized reference design. On the right is a high-level block diagram illustrating how three modules step down from 24VIN to point-of-load voltages.
In conclusion, there are multiple ways to design a power supply. However, for designers who are short on time or board space, or who simply want an easy solution with a high power-to-space ratio without sacrificing performance, a module may be the best choice.
Smart TVs, smart phones, smart homes… today, it seems every piece of technology needs the word “smart” in front of it to be viable. But what about our 50-year-old power infrastructure systems? Do they need to be “smartified” too? The media consensus seems to be “yes.”
Talk of smart grids is everywhere and has been for a few years. But what exactly is a smart grid? What are the risks and what are the rewards? And why should engineers care?
At its most basic, a smart grid is an electrical grid, beefed up with a variety of sensors to measure operational and energy efficiency. All these sensors and measuring tools, combined with information and communication technology, get bundled up into a high-tech system. The high-tech system allows for varying degrees of automation, electronic power conditioning, and control of electricity production and distribution.
This, in turn, translates into big energy savings, which is good not only from a conservation perspective but also from a quality-of-life perspective for those looking to cut back on their electricity bills. Indeed, some estimates claim consumers could save nearly $600 per household per year and cut energy consumption by 5 to 10 percent.
Smart grids are not just the purview of governments and giant corporations, either (though the term was first defined in the Energy Independence and Security Act of 2007). Consumers have been jumping on the smart grid bandwagon for at least a decade, by investing in products for their smart homes, from sleek-looking thermostats to solar.
Being able to put sensors around our homes that can alert us to energy inefficiencies puts power, quite literally, back into our own hands, letting us know not only how much we’re consuming and whether we can be more efficient, but also what we’re paying in real time. This energy awareness and “two-way” communication allow consumers and producers of electricity to track data and usage, and they are a huge factor in the smartening of our electrical grids.
The whole system, from energy generation to energy consumption, can now be quantified, measured, analyzed, and tweaked based on the intelligence and data gathered at various points in the chain. So significant are the benefits that the Department of Energy proposed an investment of $3.5 billion for the decade between 2016 and 2026 to improve smart grid technology. Interestingly, of the top 150 smart grid technology vendors, over 75 percent are US-based.
“Smart grids are important because they help to ensure the reliable delivery of electricity while also reducing emissions and improving energy efficiency,” said engineer Sam Cohen, who is also CEO of Energy Solutions. “What makes them better is their ability to self-heal, isolate problems, reroute power and manage demand through dynamic pricing.”
“Smart grids mean improved efficiency, more reliability in the constant electric supply, integration of the renewable sources to form a network, supporting the usage and the progression of electric vehicles, and more optimization in electricity consumption,” added Steven Walker, a network engineer who believes smart grids are the only grids we’ll have in the future.
According to the electrical engineers at UC Riverside, “a smart grid can help reduce greenhouse gas emissions by up to 211 million metric tons and is much more reliable than a traditional grid.” It also helps to integrate multiple renewable energy sources like wind, hydro, and solar with other more old-school energy solutions, and by doing so, encourages more reliance on these renewable energy sources.
As well as being good news for the planet and people’s wallets, smart grid systems tend to be a lot more robust, and are self-diagnosing and self-healing. Indeed, one of the biggest advantages of smart grids is that even when faced with power grid failures, disturbances, or even catastrophes, they can still mostly maintain power supply to users through smarter allocation based on real-time data.
Instead of a domino effect, where an outage could lead to extended blackouts and failures in all kinds of systems from traffic to security to heating, a smart grid could re-route power automatically and safely to where it’s most needed, minimizing disruption. A good smart grid could have alleviated many of the issues seen in places like Texas in the winters of 2021 and 2022, or New Orleans after Hurricane Katrina.
The various sensors and machine learning technology built into a smart grid can continuously monitor and analyze data, changing the behavior of the grid dynamically based on real-time detection and assessment. This is great news for early-warning and preventive control capabilities, as well as for automated fault diagnostics, fault isolation, and system self-recovery.
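To make that idea concrete, the real-time detection loop can be sketched as a rolling-statistics check: flag any reading that strays too far from its recent baseline. The window size, threshold, and line-frequency readings below are illustrative assumptions, not values from a real grid deployment.

```python
from collections import deque
from statistics import mean, stdev

class SensorAnomalyDetector:
    """Flag readings that deviate sharply from a rolling baseline."""

    def __init__(self, window=30, z_threshold=3.0):
        self.window = deque(maxlen=window)   # recent readings only
        self.z_threshold = z_threshold

    def update(self, reading):
        """Return True if the new reading looks anomalous."""
        if len(self.window) >= 10:           # need a baseline first
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(reading - mu) / sigma > self.z_threshold:
                self.window.append(reading)
                return True
        self.window.append(reading)
        return False

# Steady (hypothetical) 50 Hz line-frequency readings, then a sudden sag
detector = SensorAnomalyDetector()
readings = [50.0, 50.01, 49.99, 50.02, 49.98,
            50.0, 50.01, 49.99, 50.0, 50.02, 47.5]
flags = [detector.update(r) for r in readings]   # only the sag is flagged
```

Real grid systems layer far more sophisticated models on top, but even this simple z-score test captures the core pattern: learn what “normal” looks like, then react the moment a reading breaks it.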
In less dramatic situations, smart grids can also highlight and help to eradicate inefficiencies through monitoring and automatic processes that reduce power grid losses and improve energy efficiency. Energy usage trends can be more easily tracked and made transparent (see: why is everyone using their washer/dryer at 6pm on a Thursday night?), and a smart grid could even potentially switch off devices or appliances that shouldn’t be running at specific times, potentially saving consumers a chunk of change. Smart grids are basically a high-tech version of everyone’s dad going around switching off lights and unplugging appliances.
That same pro is a con to some though, who find it scary and intrusive that a system could have this level of control over things in your home, not to mention the data collection and privacy concerns. Most see it as a small price to pay for the energy and cost savings, however.
Other arguments in favor of smart grids include the fact that they make it much easier to aggregate and use energy from renewable resources, shifting the balance of power away from big, centralized power plants to a more distributed model of microgrids that includes smaller energy resources such as wind turbines, solar farms and hydroelectric sources. These not only meet people’s diverse power needs but also support value-added services.
Smart grids also help prevent electricity theft and reduce electricity losses in transmission and distribution. Maintenance costs are lower, and sudden equipment failures are less likely.
On the cons side, replacing analog infrastructure and equipment is costly and time-consuming. There’s also a lack of regulatory norms for technologies in smart grids and a lack of official documentation of the installation procedures, which can lead to confusion and fragmentation. Also, the high-tech nature of the smart grid requires more skilled workers who invariably cost more. Then again, it also creates many more jobs, which is a good thing.
“Design engineers play a critical role in developing and implementing smart grid technologies. As a design engineer, you need to have strong analytical and problem-solving skills, as well as experience with power system analysis and design. The pros of working in this field include the opportunity to make a real impact on the future of energy delivery, as well as the potential for high earnings. The cons include long hours and complex work environments,” said Cohen.
“Engineers who plan to work on smart grids need to possess some skills like MATLAB programming, electric utility analysis, Linux, GIS, outage management, demand response, and infrastructure management,” said Walker, adding that they should also be moderately knowledgeable in all these categories to carry out their tasks without failure.
In addition, smart grid technology will require an uptick in security experts, since any networked system is prone to security issues such as malware attacks.
Despite people’s privacy concerns about how their data is collected and used and the threats of outside attacks on the smart grid system, the technology’s pros still seem to significantly outweigh the cons. And in days when energy and the cost of energy are on everybody’s minds, increasing the efficiency of a system can only ultimately be a good thing.
In the summer of 2010, Shanghai hosted the 41st World Expo with the theme “Better City, Better Life.” This was an international high point for discussions around cultural exchange, social development and, especially, urban development. According to statistics from the United Nations and the World Bank, city dwellers made up 51 percent of the world’s population in 2010, marking a historic shift of population from rural to urban centers. From that point forward, countries all over the world began to see the connection between improving the quality of life in cities and improving the lives of their citizens.
Ten years on, the global proportion of city dwellers has increased from 51 percent to 55 percent. Based on forecasts by the United Nations, this percentage will increase to 68 percent by 2050. City life will therefore become the default state of human civilization. The concentration of populations in cities will, on the one hand, bring many conveniences, but on the other hand, will introduce new challenges in housing, traffic, environmental damage, and resource conservation, to name a few. Many hope that emerging technologies can be used to solve these new challenges that are unique to cities, and this has given birth to the concept of the smart city. As part of the smart city’s conceptual framework, the hope is that new technologies such as the internet, modernized industry, and artificial intelligence (AI) can be used to integrate city systems and services, boost the efficiency of resource utilization, and optimize city administration and services. This can help solve the problems faced by cities and improve the quality of life of their residents.
The smart city concept has been developing for more than a decade since IBM first proposed it in 2008. Some preliminary smart applications have already become a part of the everyday lives of city dwellers. Mapping software such as Google Maps combines geographical data with actual images of the city and uses algorithms to help users understand their city and plan specific routes, all from the comfort of home. Uber in the US and DiDi in China build on such services, integrating vehicle and user data with recommendation algorithms to help users quickly catch a ride. In the field of security, China established its Skynet surveillance camera system in 2017, with more than 200 million cameras in service by 2019. Similar surveillance networks are being quickly deployed elsewhere, such as the Domain Awareness System jointly built by the New York City Police Department and Microsoft, which consists of a vast number of cameras, sensors, and back-end data processing systems that can be used to constantly monitor and swiftly respond to criminal activity.
These examples of smart city applications already use some AI algorithms such as recommendation algorithms, recognition algorithms, and prediction algorithms. But the vast majority of applications are concentrated around data collection, networking, and information sharing—such as e-government platforms, device remote control, and sensor arrays. As AI technology develops alongside the smart cities it is supporting, this data will be further leveraged through such AI functions as inference, prediction, and decision-making.
With the rise of AI technology in 2012, many new technologies based on deep learning were introduced to help meet the everyday housing and transportation needs of urban residents, to help maintain the sustainability of environmental resources, and to help city administrators more quickly get a handle on information and communicate with residents. This has meant greater convenience and efficiency for urbanites.
The biggest contributor to intelligent transportation will be the advent of autonomous driving systems. When self-driving vehicles become the main mode of urban transportation, they will improve safety and efficiency, using big data and route-planning algorithms to automatically avoid congestion and find optimal driving routes.
With this future in mind, the research and industrialization of self-driving vehicles are now fully underway. Waymo, a subsidiary of Alphabet, issued a self-driving vehicle safety report in October 2020. The report stated that Waymo’s vehicles had already driven 24.1 billion kilometers on virtual roads and 32 million kilometers on actual roads. In the 106,000 kilometers of real-road testing over the past two years, only 18 actual collisions and 29 virtual collisions were recorded, most of them the result of other drivers not following traffic rules. This shows that autonomous driving technology is maturing and can adeptly handle simple road situations. However, it also shows that the vision of highly intelligent self-driving vehicles has not yet become a reality.
Although the day when self-driving vehicles can truly replace human drivers has not yet arrived, driving-assistance technology and road-control technology have already become a part of people's daily lives. Examples are technologies that use sensors, cameras, and control systems to support features such as automatic reversing and parking, and warnings about pedestrians, front and rear obstacles, and lane changes. An on-board computer can change the vehicle's path a few seconds ahead of time based on a comprehensive analysis of vehicle speed, distance, and sensor images, which is a great boon for traffic safety. On the road side, AI algorithms have already been put to good use controlling traffic lights. The city of Hangzhou, China, tested its urban data brain on some roads in the Xiaoshan District in 2016. With AI algorithms analyzing vehicle data and road surveillance cameras intelligently controlling traffic lights, the speed of traffic increased by 3 to 5 percent, and by as much as 11 percent on some road sections.
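As a rough illustration of the traffic-light idea, a controller might split a fixed signal cycle among the approaches to an intersection in proportion to their measured queue lengths. The cycle length, minimum green time, and queue counts below are hypothetical; real deployments like Hangzhou's use far more elaborate optimization.

```python
def allocate_green_time(queues, cycle_s=120, min_green_s=10):
    """Split a signal cycle among approaches proportionally to queue length,
    guaranteeing each approach a minimum green time."""
    n = len(queues)
    spare = cycle_s - n * min_green_s        # time left after minimum grants
    total = sum(queues)
    if total == 0:                           # no demand: split the cycle evenly
        return [cycle_s // n] * n
    return [min_green_s + round(spare * q / total) for q in queues]

# North-south approaches are backed up; east-west traffic is light
greens = allocate_green_time([24, 6, 24, 6])   # vehicles queued per approach
# greens -> [42, 18, 42, 18]: congested approaches get longer greens
```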
Another important role of smart cities is their protection of the urban environment and the optimization of urban resource allocation. AI can also assist in these areas.
This is true for urban electricity supply systems in particular. Urban electricity grids experience different power loads in different seasons, times of day, weather conditions, and regions. AI algorithms can combine this data with knowledge of electrical power to analyze the grid's operating mode. This makes possible a data-driven health assessment of the grid covering equipment status, network topology, and real-time operations. The health assessment allows operators to monitor the power supply and instantly discover problems. Power grid equipment, such as transmission lines and transformers, can also be monitored more frequently. Field robots collect images of equipment, which are analyzed with classification and integration algorithms to quickly discover equipment failures—such as loose dampers and missing insulators—and risks from factors such as construction work, overgrown trees, and fireworks.
A network of sensors can monitor the urban environment. Barcelona, Spain, for example, installed more than 20,000 wireless sensors around the city to collect data about temperature, humidity, pollution, noise, and traffic flow. In the future, AI algorithms can run classification and regression analysis on this data to predict pollution, weather, and traffic situations. This will help city administrators take the appropriate measures as quickly as possible.
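As a sketch of that regression step, the snippet below fits an ordinary least-squares line relating traffic flow to a pollution reading, then extrapolates to a busier hour. The traffic and NO2 figures are invented for illustration; a real model would use many more features and samples.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx                     # slope, intercept

# Hypothetical hourly samples: vehicles/hour vs. NO2 concentration (µg/m³)
traffic = [200, 400, 600, 800, 1000]
no2     = [18,  26,  34,  42,  50]
a, b = fit_line(traffic, no2)
predicted = a * 1200 + b                      # extrapolate to a busier hour
```

The same least-squares machinery, scaled up to many variables, is what lets city platforms turn raw sensor feeds into pollution, weather, and traffic forecasts.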
Garbage sorting is another area that can benefit from AI-based monitoring. Forecasts predict that the amount of trash produced by urban residents globally will increase from the current 2 billion tons a year to 3.4 billion tons by 2050. If this household waste were disposed of in landfills, it would consume billions of square meters of land every year, with a massive impact on the world's environment. Intelligent garbage sorting can replace manual work and achieve superior results. Finland's Bin-e smart trash bins first use cameras to capture images of the trash, then apply trained image-identification and object-detection algorithms to analyze the contents of the bin. Finally, a mechanical system sorts and compresses the garbage, while the bin's internal sensors notify the user and the waste management company to dispose of the waste promptly.
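The decision logic in such a bin can be reduced to a small rule: act on the classifier's top label only when its confidence clears a threshold, and otherwise divert the item to a manual-sorting bin. The labels and threshold below are invented for illustration and are not Bin-e's actual categories.

```python
def route_item(class_scores, threshold=0.80):
    """Pick a destination bin from classifier scores, deferring to manual
    sorting when the model is not confident enough."""
    label, score = max(class_scores.items(), key=lambda kv: kv[1])
    known_bins = {"paper", "glass", "plastic", "metal"}   # hypothetical bins
    if score < threshold or label not in known_bins:
        return "manual"
    return label

print(route_item({"plastic": 0.93, "paper": 0.04, "glass": 0.03}))  # plastic
print(route_item({"plastic": 0.55, "paper": 0.40, "glass": 0.05}))  # manual
```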
In smart cities, information is exchanged more efficiently and with greater transparency between city administrators and city residents. Realizing such benefits requires building the necessary data platforms and the deployment of information technology such as blockchains.
Blockchain technology features distributed storage and multi-party maintenance and is resistant to falsification. This ensures that information is valid and genuine and also increases the efficiency of point-to-point information transfers. By guaranteeing the efficient, trustworthy transmission of data, blockchain technology can in turn facilitate the application of AI algorithms.
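The tamper-resistance mentioned above comes from hash chaining: each record stores the hash of the record before it, so altering any past entry invalidates every hash that follows. Here is a minimal sketch (deliberately omitting consensus, networking, and everything else a real blockchain adds), using hypothetical smart-meter readings as the payload:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a record whose hash covers its payload and its predecessor."""
    block = {"data": data, "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def chain_is_valid(chain):
    """Recompute every hash and check each link points at the block before it."""
    prev = "0" * 64                            # genesis sentinel
    for block in chain:
        expected = hashlib.sha256(
            json.dumps({"data": block["data"], "prev": block["prev"]},
                       sort_keys=True).encode()).hexdigest()
        if block["prev"] != prev or block["hash"] != expected:
            return False
        prev = block["hash"]
    return True

chain, prev = [], "0" * 64
for reading in ["meter A: 12.4 kWh", "meter B: 9.1 kWh"]:
    chain.append(make_block(reading, prev))
    prev = chain[-1]["hash"]

ok_before = chain_is_valid(chain)       # True
chain[0]["data"] = "meter A: 2.4 kWh"   # tamper with history
ok_after = chain_is_valid(chain)        # False: the rewrite is detected
```

Because each block's hash covers its predecessor's hash, a verifier needs only the chain itself to detect that history was rewritten.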
Blockchains can also be used to protect and share data. In a city's e-government system, residents can instantly view new or modified government policies, give instant feedback, and see other people’s comments, greatly enhancing communication between city administrators and residents. Health data includes various types of private patient information, and medical records are generally kept on file only by hospitals, so they are not easily accessible. With blockchain technology, patients can establish confidential electronic health records that can be securely transferred between patient and hospital in complete form. Blockchains can also help governments and the public respond quickly to sudden public health incidents. In response to the COVID-19 outbreak, the Chinese government introduced a health code system in which each person can display their individual health status and see the local exposure risk. Blockchain technology guarantees data security and genuineness for the program, and AI algorithms analyze risk levels. The health code information service system enabled the Chinese government to respond quickly and get control of the pandemic.
Intelligent healthcare has always been seen as a natural direction in the development of AI. Microsoft's Healthcare Bot is a chatbot that harnesses natural language processing and speech-recognition technology so that patients can get diagnosis and triage for simple conditions by talking with a chatbot online. In the field of imaging, Chinese companies such as YITUTech and Deepwise have developed intelligent diagnostic systems based on image classification and segmentation to help doctors quickly find tuberculosis and pinpoint cerebral hemorrhages in computed tomography (CT) scans and magnetic resonance imaging (MRI), which enhances diagnosis efficiency.
Smart-home technology will gradually replace traditional appliances in the home. As the IoT becomes more ubiquitous, everything in the house, from traditional home appliances to curtains, doors, and windows, will connect to the home data brain, and smart-voice assistants such as Alexa and Siri will recognize verbal commands and transmit them to the corresponding household devices. AI algorithms can also analyze daily living routines to control home appliances automatically. Smart devices have already begun to find their way into the daily lives of the average person. Take smart cameras as an example. Devices such as Nanit or Cubo AI integrate scene segmentation, behavior recognition, and facial recognition algorithms to help parents monitor their child's every move from infancy to childhood. They analyze the sleeping position of infants and provide warnings about dangerous situations, such as a child climbing on furniture or an obstruction detected over an infant's mouth and nose.
In residential complexes, residents will enjoy conveniences such as intelligent logistics and unmanned supermarkets. Amazon’s warehouses, which are considered the world’s most efficient, use more than 15,000 robots working in 3D warehouses and logistics centers to convey and sort goods quickly. In terms of unmanned supermarkets, two years after Amazon started up operations with its Amazon Go markets, it opened up even bigger unmanned supermarkets called Amazon Go Grocery in 2020, not only increasing the size of the store but also adding more types of products and increasing quantities. Such well-known unmanned supermarkets combine computer vision, sensor technologies, and deep-learning algorithms to monitor the movement and interaction of multiple physical objects simultaneously. This results in the ability to record in detail images and data about each shopper's activities. Shoppers can simply take products off the shelves and place them in their bags without dealing with item scanning and checkout. Customers receive an accurate bill after they exit the supermarket.
From this overview of smart city application scenarios, it's clear that AI technology has profoundly changed the relationship that people have with information. Data and information from cities train AI technology, and AI prediction, decision-making, judgment, and modeling can be widely applied across smart cities to serve the daily needs of residents better.
The changes brought to smart cities by the application of AI technology do not stop there. Even the city's basic functions are not immune from changes occurring in this area. AI technology, autonomous driving, and the IoT have changed the way connections are made between physical objects and people and between the physical objects themselves. The allocation of resources within a city and between cities no longer relies solely on manual input and labor. This lowers the costs of transporting goods to individual communities in the city. With the rise of 5G technology and shared office spaces, more and more people will be able to work and handle their affairs near where they live. Cities can naturally develop toward having multiple centers of activity, and each center can become a multi-purpose community that does not have to be either exclusively a residential or commercial area. This lowers the overall costs of getting around the city and also naturally reduces carbon emissions.
Changes will also occur to the types of occupations that the people in cities will be working in. AI technology will handle garbage sorting, traffic control, driving, and checkout, freeing up an abundance of human resources. Meanwhile, these AI technologies will also require large-scale data collection and continuous model training, triggering a need for more data engineers, sensor hardware engineers, and AI engineers. People who have a strong grasp of AI technology will be in great demand as AI is deployed in all sorts of fields such as healthcare, education, information management, construction, and real estate.
Of course, this kind of ideal smart city will not just spring up overnight, nor is it something that can easily be brought into being through top-down planning. AI technology develops cyclically, so city administrators should develop both short- and long-term development plans. In the short term, city administrators should support AI businesses that use deep-learning-based AI technologies to create applications in areas such as transportation, healthcare, and power, thus jointly forming intelligent infrastructure from the bottom up. In the long term, AI technology will likely see revolutionary advances, yet information and data will always be inseparable from it. Therefore, administrators of future smart cities should digitize all city administrative functions and all city-related data. Such digitization will give cities a virtual replica of the physical city, allowing simulation for urban planning and forecasting of potential incidents. Digitization also builds the data foundation for further application of AI technologies, and it will provide advanced tools for urban planning and city construction.
In addition to AI technology, building smart cities will require developments in other basic technologies. One example is 5G technology, which is set to make a long-lasting impact. It transmits data at speeds that are 20 times faster than what 4G technology can handle, and it supports the simultaneous transmission of data from many different communication devices. The enormous input of data required by AI algorithms can be transferred to the cloud, processed, and instantly returned. This allows for the use of lightweight smart devices that do not need complicated processors. Meanwhile, connecting as much infrastructure equipment as possible to a smart network can finally achieve the Internet of Everything (IoE). Newly installed smart devices can also further promote the city's digitization so that its digitization and smartification can move forward in tandem.
Smart cities will still have some limitations. Massive differences in histories, cultures, planning, and management between cities mean that experience might not be readily replicable. For example, China will need to consider its very high population density and historic landmarks. In contrast, Australia would need to deal with the significant differences between coastal and interior cities. AI algorithms are always influenced by the data they rely on, and the process and results of their work reflect the prejudices of the data source to some degree or another. This requires city administrators and social workers to supervise the algorithms and the data collection to ensure that the results are fair for all segments of society. City residents will also have to relinquish some of their data privacy to enjoy the convenience provided by these algorithms. Therefore, the use of this private data will have to be protected by rigorous standards for data management. The actual environment of the city itself will also be a factor that limits the scope of its development. This means that, while developing big cities, governments should also place importance on building up remote regions and rural areas so that all population centers can enjoy the conveniences brought by AI technology.
Smart cities offer residents the dream of a fast, convenient urban life: smart, efficient, and full of hope. This future certainly requires the helping hand of AI technology. Building smart cities will not happen overnight. As AI technology is embedded in cities, residents will gradually be introduced to new concepts and new lifestyles that will not necessarily be easy to accept right away. However, the benefits that this next great technological revolution offers human civilization make the effort worthwhile.
The power of eyewear has come a long way since its inception. The first eyeglasses were invented in Italy in the late 13th century, revolutionizing the way people with vision impairments interacted with the world. These early glasses were simple convex lenses mounted on frames primarily used to correct farsightedness. Over the centuries, eyeglasses evolved, with improvements in lens technology and frame design enhancing both vision correction and comfort.
Now, what we can expect from a pair of lenses goes far beyond vision correction. The concept of smart glasses marked a significant leap in eyewear technology. Leading the way was Google Glass, or simply Glass (Figure 1), introduced in 2013. Glass was one of the first products to merge traditional eyeglasses with modern technology. When released, Glass resembled something “The Borg” of Star Trek might wear, displaying information for the user on a head-up display (HUD) much like those found in many of today’s vehicles.
Figure 1: Google Glass can be controlled using the touchpad built into the side of the device. (Source: https://commons.wikimedia.org/wiki/File:A_Google_Glass_wearer.jpg)
Glass's journey unfortunately didn't align with consumer readiness and market expectations, leading to its decline. In short, consumers were not ready for Glass. However, the evolving integration of advanced technologies is now fueling a renewed interest in the smart glasses sector.
Fast forward to today, and despite the setbacks faced by Google Glass, smart glasses have evolved into more practical and stylish wearables. Companies like Ray-Ban and Oakley have entered the market, focusing on aesthetics and functionality. This renewed interest can be attributed to advancements in and the convergence of technologies that have allowed for more stylish and less obtrusive designs, potentially overcoming one of the significant hurdles faced by Google Glass. Furthermore, there's a growing interest in wearable technology as it becomes more integrated into daily life.
Additionally, advancements in augmented reality (AR) and artificial intelligence (AI) could transform how we interact with our environment, offering real-time information overlays and immersive experiences. The vast potential for medical, educational, and business applications indicates that smart glasses may eventually become prevalent in our daily lives.
Today's smart glasses are not only fashionable but also significantly more functional than their predecessors. Smart glasses are being designed for portability and daily use to enhance and interact with the real world. With smaller displays integrated into the lenses, they can overlay digital information without obstructing the user’s vision when displaying notifications, navigation, or camera functions. Also, smart glasses are generally more lightweight and designed to be worn like regular glasses, making them more suitable for continuous wear and everyday activities.
Meanwhile, AR/VR headsets continue to be bulkier, as they are not intended for use while moving around or performing other tasks. These devices are primarily designed for immersive gaming experiences, offering a fully virtual environment that replaces the user's real-world surroundings with a wider field of view. In short, AR/VR headsets isolate the user from their physical environment, while smart glasses are designed to interact with and augment the real world.
Unfortunately, there are privacy concerns surrounding smart glasses, which in part affected the success of Google Glass, and these issues have not necessarily been resolved. Smart glasses present unique privacy concerns compared to other technologies, such as smartphones. They can record audio and video more discreetly, that is, without the visible actions required by smartphones, such as holding up the device. This discretion makes it difficult for others to detect when they are being recorded. Additionally, smart glasses can continuously capture data while worn. Although some smart glasses have security features like file encryption, these do not fully address the issue of covert recording in public or private spaces. Furthermore, while the public is generally aware of smartphones' recording capabilities, smart glasses are newer and less understood, leading to heightened privacy concerns.
This week, we highlight two innovative components from FRAMOS and PUI Audio, renowned for their dedication to quality and forward-thinking design. These components represent the pinnacle of modern technology, meticulously engineered for the emerging field of next-generation wearable devices, including advanced smart glasses.
The FRAMOS Sensor Module (FSM) featuring the Sony IMX296 sensor is a compact, high-performance module measuring just 26.5mm x 26.5mm. It is equipped with a global-shutter sensor offering a 1.6MP native resolution in a 1/2.9" optical format, with 3.45μm x 3.45μm pixels. The module supports a 1-lane MIPI CSI-2 interface. Designed for seamless integration into various processing platforms, the FSM family demonstrates remarkable modularity, utilizing standardized connectors and mechanical parts. The family spans resolutions from 0.4MP to 24MP, with both rolling- and global-shutter options, addressing a broad range of imaging needs. Ideal for sensor evaluation in early-stage design, the FSM facilitates comparative analysis and is easily integrated into third-party processor boards, enhancing its utility in diverse technological applications.
Design engineers focusing on smart glasses development can benefit significantly from the FSM-IMX296 Sensor Module's compact size and modular design.
The PUI Audio Piezo Haptic Benders, comprising three distinct models, offer significant advantages for innovative wearable designs, including smart glasses. The AB1270A-LW100 model is notable for its high-temperature resistance, enduring extreme conditions from -40°C to +85°C, making it suitable for wearables exposed to harsh outdoor environments or used in automotive settings. Meanwhile, the HD-PAB2001-LW100 and HD-PAB2701-1 models stand out with their low-profile design combined with high voltage and displacement capabilities, catering to demanding applications like transmission systems and medical devices such as blood pressure or insulin pumps. These versatile haptic benders, compliant with RoHS/REACH standards, are ideal for integration into wearable applications, offering robust performance in various conditions.
The evolution from traditional eyeglasses to smart glasses showcases remarkable technological and design progress. Google Glass, despite its initial setbacks, catalyzed renewed interest in this domain. Modern smart glasses, leveraging augmented reality and artificial intelligence, blend style with functionality, marking a significant leap in wearable technology. However, privacy issues, notably around discreet recording capabilities, persist as a major challenge. Addressing these concerns is essential for the broader acceptance and integration of smart glasses into daily life.
In this evolving landscape, suppliers like FRAMOS and PUI Audio are playing a pivotal role in the development of next-generation wearables.
FRAMOS. “Sensor Modules Help Accelerate Embedded Vision Development.” February 28, 2019. https://www.framos.com/en/articles/sensor-modules-help-accelerate-embedded-vision-development.
Mukhiddinov, Mukhriddin, and Jinsoo Cho. “Smart Glass System Using Deep Learning for the Blind and Visually Impaired.” Electronics 10 (22): 2756. https://doi.org/10.3390/electronics10222756.
A Smart City, like any organization, depends on three basic processes: gathering information, analyzing it, and acting on the results. The technology to accomplish this has multiple layers:
Figure 2: A Smart City network has multiple layers, each with a different function and response time. (Source: ASE International Conference on Big Data)
End nodes gather information from a multitude of sensors, cameras, and other devices located on buildings, lights, parking spaces, sewer pipes, electricity meters, dumpsters, and many other locations. For sending information upstream, wireless communication holds the most promise for the future. Depending on the specific requirements, WiFi, Bluetooth Low Energy (BLE), ZigBee, and LoRa are popular wireless options. Each one has its own combination of strengths and weaknesses.
Gateways communicate with the thousands of end nodes, aggregate the data and send it to the cloud, often over a fiber-optic backbone. The gateways must be able to handle the diversity of end-node protocols, and perform analysis as needed.
The Cloud Layer processes the data streams to control the operation of the various functions, plus gathers data to identify longer-term opportunities for improvement (data analytics). The cloud may extend downstream to the gateways, a strategy Cisco has called “fog computing.” As shown in Figure 2, this approach processes time-sensitive data (area traffic light control, for example) in the most efficient location, which reduces the burden on the network backbone.
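The fog-computing split described above can be sketched in a few lines. The following is a purely illustrative toy, not code from any real deployment: the gateway acts locally on time-sensitive readings and forwards only a compact aggregate to the cloud, reducing the burden on the backbone. All names and thresholds are assumptions made up for the example.

```python
# Toy sketch of "fog computing" at a Smart City gateway: time-sensitive
# readings are handled at the edge, while only compact aggregates travel
# upstream. Every name and threshold here is illustrative, not from any
# real deployment.

def handle_locally(reading):
    """Act immediately on a time-sensitive event (e.g., adjust a traffic light)."""
    return f"local action for {reading['sensor']}"

def gateway_process(readings, urgent_threshold=50):
    """Split readings into local actions and a summary destined for the cloud."""
    local_actions = []
    total, count = 0, 0
    for r in readings:
        if r["value"] > urgent_threshold:      # time-sensitive: act at the edge
            local_actions.append(handle_locally(r))
        total += r["value"]
        count += 1
    # Only an aggregate crosses the backbone, reducing network load.
    cloud_payload = {"mean": total / count if count else 0.0, "samples": count}
    return local_actions, cloud_payload

readings = [
    {"sensor": "traffic-cam-7", "value": 80},   # urgent: handled locally
    {"sensor": "parking-12", "value": 10},
    {"sensor": "air-quality-3", "value": 30},
]
actions, payload = gateway_process(readings)
print(actions)   # one local action, for the urgent reading only
print(payload)   # {'mean': 40.0, 'samples': 3}
```

The design choice mirrors the figure: latency-critical control stays close to the sensors, while the cloud sees only what it needs for long-term analytics.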
The Smart City infrastructure requires the integration of a diverse set of technologies into a seamless whole, and suppliers are working with cities to help the vision become a reality. Intel, for example, is involved in multiple projects worldwide that range from smart kiosks in Eindhoven, Holland, to a network of sensors that helps residents of San Jose, CA, reduce air pollution.
These projects illustrate the breadth of solutions that must be brought to bear for a successful deployment. The Eindhoven kiosks, called Smart Beacons, allow both tourists and residents to interact with local businesses, plus access weather, news, and entertainment information. Smart Beacons feature two 55-inch HD screens and a 32-inch screen, plus free Gigabit+ WiFi. They rely on several Intel products, such as a 6th Generation i5 (Skylake) processor and an Intel solid-state drive (SSD).
The Intel Core™ i5-6500 is a good example: It's a 64-bit quad-core desktop processor built on 14nm technology that runs at a base frequency of 3.2GHz and includes features such as AES-NI instructions and an integrated graphics processor supporting up to 4096x2304 resolution.
For a representative SSD, consider the Intel SSD E 6000p: It's a 16GB drive based on Intel's Optane memory technology that features typical read and write latencies of 8.25μs and 30μs, respectively.
Visit Mouser’s IoT application and technology site for more information about connectivity, networking, and related products.
Visit the Shaping Smarter Cities Homepage to learn more about Mouser's commitment to Empowering Innovation.
With Matter, smart-home devices will work well together and the true potential of connected tech will be realized. (Source: AndSus - stock.adobe.com)
The dream of the smart home—an automated dwelling that cossets its occupants in a warm blanket of technology—remains just that. But we might not have to wait too much longer for easily accessible, reliable, and crucially, interoperable connected-home products, which is a welcome relief because the smart home’s potential has been touted for much longer than you might think.
Science fiction aside—which nearly a century ago had robots helping with household chores and homes that continued to operate even though the occupants were long gone—American Jim Sutherland was among the first to attempt wide-scale automation. A Westinghouse power station engineer by day, Sutherland designed the Electronic Computing Home Operator (or ECHO IV) in his spare time during 1966. The machine managed the Sutherland family home accounts, calendar, air conditioning, and TV antennas, among other tasks. The phrase “smart home”—coined by the American Association of House Builders in 1984—is only a little younger than ECHO IV.
Yet, in 2022, mainstream smart-home adoption remains elusive. While shipments of connected home devices—think smart speakers, lights, and thermostats—number in the billions, they tend to be purchased by early tech adopters. Analyst firm Statista, for example, claims that just 14.2 percent of homes across the globe have embraced smart-home products.
The slow take-up is caused by complexity. Today, it’s almost impossible to walk into a store and walk out with a range of smart-home products that play nicely together. Even tech-savvy buyers struggle to get their smart-home products working. For example, early-adopters find a digital voice assistant from one manufacturer often falls over when trying to configure and control smart lights or an air-conditioning system built by another vendor. Without an informed choice of technology and smart-home ecosystems—such as Apple’s, Amazon’s, or Google’s—consumers seem to be forever toiling to keep finicky equipment connected. The average consumer has no chance. And neither does a realization of the fully-integrated smart home.
While around 14 different connectivity standards are vying for a share of the smart-home sector, BLUETOOTH® Low Energy (Bluetooth LE), Wi-Fi®, and Thread are forging ahead. But that’s little help to consumers because even these mainstream RF protocols are not interoperable.
Realizing that no single wireless connectivity standard is ever likely to emerge, the tech industry has come together to find an engineering solution that promises harmony. The 400-plus member group these companies have formed is called the Connectivity Standards Alliance (CSA). As of October 2022, the organization has announced the release of Matter 1.0, a smart-home protocol that promises to straighten out the current tangle of wireless connectivity.
Rather than introducing a competing standard, Matter complements the existing smart-home technologies of Thread and Wi-Fi (plus the Ethernet-wired protocol). Thread is a popular low-power protocol suitable for devices like thermostats and smart lights, while Wi-Fi supports higher-bandwidth products such as entry cameras. Bluetooth LE support is included primarily because of its interoperability with smartphones—thus allowing consumers to use their mobiles to commission and configure their new smart-home gadgets. For the technically minded, Matter adds a unifying application layer to the Wi-Fi, Thread, and Bluetooth LE protocol stacks that manufacturers can leverage to bring compatibility and interoperability to their products.
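The layering idea can be illustrated with a small conceptual toy: one application layer whose commands work identically regardless of the transport underneath. The classes below are invented for illustration and bear no relation to the real Matter SDK's APIs.

```python
# Conceptual toy model of Matter's design: a single unifying application
# layer riding on interchangeable transports. Illustrative only; these
# classes are NOT the actual Matter SDK.

class Transport:
    name = "generic"
    def send(self, payload: bytes) -> str:
        # A real transport would frame and radio the bytes; the toy just
        # tags the payload with the transport that carried it.
        return f"{self.name}:{payload.decode()}"

class WiFiTransport(Transport):
    name = "wifi"        # higher-bandwidth devices, e.g., entry cameras

class ThreadTransport(Transport):
    name = "thread"      # low-power devices, e.g., thermostats and lights

class AppLayer:
    """The same application command works over any underlying transport."""
    def __init__(self, transport: Transport):
        self.transport = transport
    def set_on_off(self, on: bool) -> str:
        command = b"ON" if on else b"OFF"
        return self.transport.send(command)

# A smart light on Thread and a camera on Wi-Fi accept the same command:
print(AppLayer(ThreadTransport()).set_on_off(True))   # thread:ON
print(AppLayer(WiFiTransport()).set_on_off(False))    # wifi:OFF
```

The point of the sketch is the separation of concerns: device makers target the one application layer, and the choice of radio becomes an implementation detail.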
But perhaps more importantly, Matter promises simplicity for consumers. Instead of having to work out if a thermostat is Apple compatible, or if Google smart speaker can control a Yale smart lock, buyers can check for the Matter certification that ensures interoperability. And for manufacturers, the product development process is made easier because they can use a single standard for all their products, safe in the knowledge they’ll work with all major smart-home ecosystems.
Who’d have thought that fierce competitors like Apple, Amazon, Google, and Samsung would even sit around the same table, let alone work closely together for years on a solution to the smart home’s stultifying complexity? Cynics said it would never happen, and for a long time, it seemed they were right—for example, the Matter 1.0 standard faced several lengthy delays before it was adopted.
With some fanfare, the project was originally announced in 2019 as Project CHIP, and the standard was planned for release in late 2020. That was delayed into early 2021. Then in August 2021, following the rebrand to Matter, the standards release was pushed to mid-2022. Finally, because of problems with the Matter software development kit (SDK), Matter was released in late 2022.
The good news is that collaboration continued behind the scenes during the delays, and chip makers and end-product manufacturers worked hard on their hardware and software solutions ahead of the official launch. Because of that background work, it is already possible to purchase Matter chips from a selection of silicon vendors mere weeks after the standard was adopted. In addition, the certification labs are up and running, the SDK is available, and companies are lining up for Matter certification of their smart-home devices.
Now that Matter is here, manufacturers will be able to put much less effort into patches and workarounds to ensure their products work with others and focus more on innovation, security, and quality. In a decade or less, the smart home will be commonplace in the developed world, and it will be much more than a place where our voice controls the lights or a smart thermostat looks after the heating. Instead, Artificial Intelligence and machine learning will fine-tune automation so that energy bills drop, the electric car is only charged for the short journey it knows you’re taking tomorrow, the media room lights are set for movie night, and some paracetamol has been automatically ordered and delivered because your wearable has detected signs of an impending chill.
High-side SmartFETs have grown in popularity due to their ease of use and the high level of protection they provide. Like standard MOSFETs, SmartFETs are ideal for various automotive applications. What separates them is the control circuitry built into a high-side SmartFET device. The control circuitry constantly monitors output current and device temperature while offering passive protection against voltage transients and other unexpected application conditions. This combination of active and passive protection features ensures a robust application solution, extending the lifetime of both the device itself and the application load it is protecting.
onsemi now offers a family of high-side SmartFETs ranging from 45mΩ up to 160mΩ. The devices are protected, single-channel high-side drivers that can switch various loads, such as bulbs, solenoids, and other actuators. As shown in Table 1, the device name indicates the SmartFET's typical RDS(on) at 25°C. The complete family of products is listed below:
Table 1: The Complete Family of onsemi High-Side SmartFETs
onsemi’s family of devices is housed in an SO8 package, providing a small footprint while delivering high power. A family pinout for the 45mΩ to 140mΩ devices provides convenience to the designer, allowing one pinout for various application load uses. Simply switch out one device for another depending on the level of current required for a given application. The devices drive 12V automotive grounded loads and provide protection and diagnostic capabilities. The family of devices incorporates advanced protection features such as active inrush current management, over-temperature shutdown with automatic restart, and an active overvoltage clamp. A dedicated Current Sense pin provides precision analog current monitoring of the output and fault indication of short to battery, short circuit to ground, and ON and OFF state open load detection. All diagnostic and current sense features can be disabled or enabled by an active-high Current Sense Disable pin (NCV84160 only), or an active-high Current Sense Enable pin (all other parts in the family).
The “end requirement” from a high-side SmartFET is to switch loads, and there are different alternatives available in the market towards that end. Relays, for instance, have been used for a long time in the industry to switch various automotive loads, especially those requiring high current activation. With a continual reduction in the weight and size of automotive components and assemblies, there has been a transition from relays to semiconductor switches that take up less area and offer improved noise immunity and lower electromagnetic interference than relays.
High-side SmartFETs have become the dominant SmartFET configuration in the automotive market, replacing the generally simpler low-side SmartFET. Figure 1 shows an example of a high-side versus a low-side SmartFET configuration. While a high-side SmartFET's load always connects to ground with a switched connection to the supply, a low-side SmartFET's load always connects to the power supply with a switched connection to GND. The SmartFET is typically housed inside a control unit or ECU. The load line is the cable that connects the load to the pin connector on the ECU. Depending on the load type and its location in the vehicle, this load line could be long, increasing the likelihood of a short to chassis ground, which would be a severely stressful condition for the load in a low-side SmartFET configuration. This makes the high-side SmartFET the preferred choice for load switching.
Figure 1: High-Side Versus Low-side Switch in an Application (Source: onsemi)
Figure 2 below shows the top-level block diagram and pin configuration of the NCV84xxx high-side SmartFET family from onsemi. Notice that the high-side SmartFET is, in fact, an NMOS FET, with a Regulated Charge Pump pulling the gate voltage up high enough to drive the load.
The Input (IN) pin is a logic level pin that turns the control logic/charge pump on and off to operate the FET. The Current Sense Enable (CS_EN) pin enables and disables the Current Sense feature. The Current Sense (CS) pin allows proportional load current sensing to be fed back to the microcontroller for real-time feedback. This pin is multiplexed; it reports analog fault events, which are easily distinguishable from normal operation, giving the user the capability of sensing the output current or a fault condition in real-time. The voltage (VD) pin connects directly to the battery or power supply, and the OUT pin connects to the load. Finally, the ground (GND) pin is simply the device GND.
Figure 2: Block Diagram and Pin Configuration of NCV84xxx (Source: onsemi)
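To make the multiplexed CS pin concrete, here is a minimal firmware-side sketch of how a microcontroller might interpret it: the ADC voltage across a sense resistor is scaled to a load current by a current-sense ratio, and a voltage pinned near the high rail is treated as the fault flag. The ratio, resistor value, and fault level below are made-up placeholders; a real design takes them from the device datasheet.

```python
# Illustrative interpretation of a multiplexed Current Sense (CS) pin.
# K_SENSE, R_SENSE, and V_FAULT are hypothetical placeholder values,
# NOT datasheet figures for any NCV84xxx part.

K_SENSE = 1500          # assumed load-current : sense-current ratio
R_SENSE = 1000.0        # ohms, assumed resistor from CS pin to ground
V_FAULT = 4.5           # volts; CS pulled near this rail signals a fault

def read_cs(v_cs: float):
    """Return ('fault', None) or ('ok', load_current_in_amps)."""
    if v_cs >= V_FAULT:
        # Fault events are reported as a fixed high level, easily
        # distinguished from the proportional sensing range.
        return ("fault", None)
    i_sense = v_cs / R_SENSE            # current flowing out of the CS pin
    return ("ok", i_sense * K_SENSE)    # scale up to the load current

print(read_cs(1.0))   # ('ok', 1.5) -> 1.5A load current under these assumptions
print(read_cs(4.8))   # ('fault', None)
```

Because normal sense voltages and the fault level never overlap, a single ADC channel gives the microcontroller both real-time current feedback and fault detection.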
The NCV84xxx SmartFET family of devices offers a suite of protection features, including active inrush current management, over-temperature shutdown with automatic restart, and an active overvoltage clamp.
Figure 3: How TJ Progresses During Short to GND/Overload (Source: onsemi)
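The sawtooth behavior of Figure 3 can be reproduced with a toy thermal model: while the FET conducts into a short, the junction temperature climbs until the shutdown threshold trips, the device cools while off, and it restarts once the temperature falls below a lower threshold. The temperatures and rates below are arbitrary illustration values, not datasheet figures.

```python
# Toy thermal model of over-temperature shutdown with automatic restart
# during a sustained short to GND. All constants are invented for
# illustration, not taken from any NCV84xxx datasheet.

T_SHUTDOWN = 175.0   # °C, assumed junction shutdown threshold
T_RESTART = 160.0    # °C, assumed restart threshold (built-in hysteresis)
HEAT_RATE = 5.0      # °C per step while conducting into the short
COOL_RATE = 2.0      # °C per step while switched off

def simulate(steps, tj=25.0):
    """Return the junction-temperature trace; the device toggles itself."""
    on, trace = True, []
    for _ in range(steps):
        tj += HEAT_RATE if on else -COOL_RATE
        if on and tj >= T_SHUTDOWN:
            on = False               # protection turns the FET off
        elif not on and tj <= T_RESTART:
            on = True                # automatic restart after cooling
        trace.append(tj)
    return trace

trace = simulate(200)
# TJ oscillates in a sawtooth between roughly T_RESTART and T_SHUTDOWN
# for as long as the short persists, instead of running away.
print(max(trace), min(trace[60:]))
```

The hysteresis between the two thresholds is the key design choice: it keeps the device from chattering at a single trip point while still limiting the peak junction temperature.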
For in-depth information on how the high-side SmartFETs operate, including protection functions, current sensing, etc., please refer to onsemi’s application note—High-Side SmartFET with Analog Current Sense Application Note.
The High-Side SmartFET Drivers for Automotive Load Applications blog was first published on onsemi.com and was reposted here with permission.