Imagine that you are driving your car and approach a pedestrian crossing. You observe posted signage and perhaps look for a crossing patrol. If you are following another vehicle, you are aware that it might stop. Once at the crossing, you glance left and right for pedestrians who look like they intend to cross. You check for pedestrians already in the crosswalk, and you are ready to yield to pedestrians crossing or about to cross. When it is safe to do so, you proceed through the crossing.
This scenario demonstrates situational awareness, a characteristic of human intelligence that refers to being aware of, and able to respond to, our surroundings. In autonomous vehicles (AVs), vision systems, high-end processing, and neural networks enable vehicles to perceive their surroundings, plan actions, and respond to changing stimuli, but can we achieve human-level situational awareness in these systems? A look at advances in neural networks reveals the potential for and limitations to reaching this goal.
Continuing the driving scenario, a human driver first perceives the situation, which requires understanding the objects and people involved, their circumstances, and their potential movements. We detect crosswalk signage and signals, cars ahead and behind, pedestrians, and other variables in the scene. We also notice more subtle cues, such as pedestrians’ characteristics (younger, older, aggressive, intoxicated, in a hurry), their situations (involved in other activities, alone or in a group), and their intention to cross. Throughout this process, human drivers simultaneously ignore input irrelevant to the situation, such as a bird sitting on the stop sign or litter on the side of the road.
In human situational awareness, the delay between perception and action allows a choice informed by broader experience: the remembered difference between expected and actual outcomes. Drawing on these experiences, we construct imagined scenarios that help us assess potential risks and determine actions in the current situation. In other words, humans remember not only the outcomes of previous actions versus their expectations but also the alternative scenarios they imagined. In the driving scenario, humans can imagine what could go wrong and consider pedestrians’ viewpoints as part of the decision-making process.
The ability to imagine potential outcomes and perspectives demonstrates what researchers are discovering: embodying intelligence requires an entire integrated system of senses, perceptions, and brain regions working together to adapt to situations. This corporeal basis of human intelligence opens an affective horizon that provides context for, and orients, perception, decision-making, and action.
Several technologies enable some aspects of situational awareness in artificial intelligence (AI) systems. In AVs, sensors, sensor fusion, and high-end processing, for example, have enabled vehicles to perceive a scenario and derive a semantic description of the traffic situation. They do this by constructing a representation of the vehicle’s environment, then splitting that representation into cells. A combination of hybrid sensor approaches, knowledge-based inferencing, heuristic algorithms, Bayesian reasoning, fuzzy logic, and neural networks creates an overall estimation of what a human driver would perceive.
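The cell-based environment representation described above is commonly implemented as an occupancy grid, with Bayesian reasoning used to fuse readings from multiple sensors into a per-cell belief. The following is a minimal sketch of that idea, not the implementation of any particular AV stack; the class name, grid size, and sensor probabilities are illustrative assumptions.

```python
import numpy as np

def log_odds(p):
    """Convert a probability to log-odds form for stable Bayesian updates."""
    return np.log(p / (1.0 - p))

class OccupancyGrid:
    """Toy occupancy grid: the environment is split into cells, and each
    sensor reading updates the belief that a given cell is occupied."""

    def __init__(self, width, height, prior=0.5):
        # A prior of 0.5 means "no information yet" (log-odds of zero).
        self.grid = np.full((height, width), log_odds(prior))

    def update(self, cell, p_occupied):
        """Fuse one sensor reading (a per-cell occupancy probability)
        into the running belief; in log-odds form, fusion is additive."""
        row, col = cell
        self.grid[row, col] += log_odds(p_occupied)

    def probability(self, cell):
        """Recover the occupancy probability for a cell from its log-odds."""
        row, col = cell
        return 1.0 - 1.0 / (1.0 + np.exp(self.grid[row, col]))

# Two independent "occupied" readings reinforce each other.
g = OccupancyGrid(10, 10)
g.update((5, 5), 0.8)  # e.g., a lidar return suggesting an obstacle
g.update((5, 5), 0.7)  # e.g., a radar return for the same cell
belief = g.probability((5, 5))  # combined belief exceeds either reading alone
```

Real systems layer the other techniques the article mentions (heuristics, fuzzy logic, learned classifiers) on top of such a grid to produce the semantic traffic description.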
To replicate the decision-making component of situational awareness, AI systems can be augmented with local optimizations, approximate reasoning, and neural networks that simulate expectations based on previous training. In terms of replicating human intelligence, the neural network captures the brain’s structural connectivity and maintains contiguity between input characteristics and their evolution over time.
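The expectation-versus-outcome comparison described here can be sketched in a few lines. Below, a fixed linear map stands in for a trained network that predicts the next state of a tracked pedestrian; the matrix values, state layout, and the term "surprise" are all illustrative assumptions, not part of any cited system.

```python
import numpy as np

# Hypothetical learned dynamics: a linear model standing in for a
# trained network that predicts the next state from the current one.
W = np.array([[1.0, 0.1],   # position advances by velocity * dt
              [0.0, 1.0]])  # velocity assumed constant

def expect(state):
    """Simulated expectation: what the system predicts will happen next."""
    return W @ state

state = np.array([2.0, 0.5])          # e.g., [position, velocity]
observed_next = np.array([2.1, 0.5])  # what the sensors actually report

# The gap between expectation and observation plays the role of the
# expected-versus-actual difference that humans remember and act on.
surprise = np.linalg.norm(expect(state) - observed_next)
```

A large surprise value would signal that the situation departs from prior experience and that the current plan may need revising.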
Advances in mimicking the human brain’s functional connectivity allow dynamic cooperation among different parts of the brain, including between high-level, complex structures and hard-wired neural structures. Enabling this cooperation between high-level and neural structures leads to an open system of meaning creation and reasoning akin to human decision-making in situational awareness. However, cooperation among different brain structures requires a superstructure of connectivity, called effective connectivity, that captures the influence one neural system exerts over another over time. Without effective connectivity, an AI system cannot prioritize inputs by their importance, a prioritization required to make decisions and take the correct action.
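The input prioritization discussed here resembles what attention mechanisms do in modern networks: score each input against the current goal, then weight inputs by those scores. The sketch below illustrates that pattern with hand-picked feature vectors from the driving scenario; the feature layout and the goal vector are hypothetical, chosen only to echo the article's crosswalk example.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: turns raw scores into priorities
    that are positive and sum to one."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def prioritize(query, inputs):
    """Score each input feature vector against the current goal (query)
    and blend the inputs by importance, attention-style."""
    scores = inputs @ query           # relevance of each input to the goal
    weights = softmax(scores)         # normalized priorities
    return weights, weights @ inputs  # importance-weighted scene summary

# Hypothetical features: [pedestrian-likeness, motion-toward-road, size]
inputs = np.array([
    [0.9, 0.8, 0.3],  # pedestrian stepping toward the crosswalk
    [0.1, 0.0, 0.2],  # bird sitting on the stop sign
    [0.0, 0.1, 0.1],  # litter at the roadside
])
query = np.array([1.0, 1.0, 0.0])  # goal: find pedestrians about to cross

weights, summary = prioritize(query, inputs)
# The pedestrian dominates the weights; the bird and litter are
# effectively ignored, as a human driver ignores irrelevant input.
```

Without some such weighting superstructure, every input would contribute equally, which is exactly the failure mode the article attributes to missing effective connectivity.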
Human situational awareness refers to our awareness of and response to our surroundings based on our perception of the whole scenario and ability to draw on a range of interconnected experiences—actualized and not—to make decisions. Even if neural networks evolve to capture more complex situational awareness, they will never truly match human intelligence. The right synergy between AI and humans exploits the strengths of AI while enabling humans to increase their situational awareness and affect situational control. In turn, this will make machines more intelligent (in their way) and allow humans to focus their attention and energy on more creative tasks.
Constantin Thiopoulos is a technology commercialization expert specializing in Artificial Intelligence and co-founder of spin-offs from leading R&D organizations. He has been an innovation management consultant of the German Research Centre of Artificial Intelligence and IT consultant for several European companies. He has a Ph.D. in Artificial Intelligence, an MSc in Computer Science, an MPhil in Philosophy/Linguistics and has been a lecturer and guest professor.
Copyright ©2021 Mouser Electronics, Inc.