Touch. Taste. Smell. Vision. Hearing. The human brain continuously takes in these sensory signals, processes them, and fuses them into a whole that is more than just the sum of the parts. Engineers around the world are working to develop an artificial form of that same sort of sensor fusion in order to enhance the robustness of future autonomous vehicles.
Senses and Sensors
When we sit down to a meal, the appeal of that food is affected by far more than our taste buds. If a prime cut of steak were boiled into a grey slab, even if the taste were not affected, the visual signals to our brain would render it less desirable than if it had been seared over an open flame. No matter how well it might be prepared, if your sinuses are clogged from a cold, a plate of curry just doesn’t taste as good. The crunch when you bite into a fresh carrot stimulates your ears and your sense of touch in your mouth, but the same root steamed into mush has a totally different impact.
Since the 1970s, engineers have been steadily adding sensors to vehicles to monitor wheel speeds, airflow into the engine, engine knock, roll rates, distance to other vehicles, and more. Each sensor was added to enable a specific function, but over time, as engineers became confident in the reliability of the sensors, they built on that functionality. The first modern step toward the autonomous systems now being tested was the Mercedes-Benz/Bosch anti-lock braking system introduced in 1978.
Forward-looking radars and cameras enable adaptive cruise control and lane departure warnings. Side-looking radar and ultrasonic sensors power blind spot detection, cross-traffic alerts, and active parking assist. Today, each of those functions operates largely independently and at different times. The automated highway driving assist systems coming from Tesla, General Motors (GM), Toyota, and others in the next two years merge those signals and functions into more comprehensive control systems that let the driver go hands-off under certain conditions. Navigant Research’s Autonomous Vehicles report projects that the majority of new vehicles will have at least some degree of automated driving capability by the mid-2020s.
This is made possible in large part by fusing these previously disparate signals to harness the advantages of each sensor type, producing a more cohesive view of the world around the vehicle. Radar sensors are useful for measuring the distance to and speed of another object, but not for recognizing what that object is. Digital camera images can be processed to distinguish pedestrians, animals, objects on the road, and signs, while lidar sensors can produce remarkably detailed 3D maps of the surroundings. Vehicle-to-X (V2X) communications provide additional real-time information about what is happening beyond the line of sight of the driver and the onboard sensors. These and other signals can be merged into a comprehensive, real-time moving picture that the vehicle can navigate with a high degree of precision.
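To make the complementary-strengths idea concrete, here is a minimal, illustrative sketch of one common fusion approach: a constant-velocity Kalman filter keeps the kinematic track (the radar's strength) while the semantic label comes from the camera (its strength). All class and parameter names below are invented for this example, and the noise values are assumed rather than tuned; production systems are far more elaborate.

```python
# Illustrative sensor fusion sketch: radar supplies distance and closing
# speed; the camera supplies the object's class. Both are merged into one
# track. Names and noise values are assumptions for this example only.

import numpy as np

class FusedTrack:
    def __init__(self, distance_m, speed_mps):
        # State vector: [distance to object (m), closing speed (m/s)]
        self.x = np.array([distance_m, speed_mps], dtype=float)
        self.P = np.eye(2) * 10.0        # state covariance (initial uncertainty)
        self.label = "unknown"           # semantic class, supplied by the camera

    def predict(self, dt):
        # Constant-velocity motion model: distance changes by speed * dt.
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = np.eye(2) * 0.1              # process noise (assumed, untuned)
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + Q

    def update_radar(self, distance_m, speed_mps):
        # Radar observes the full state directly, with measurement noise R.
        z = np.array([distance_m, speed_mps])
        H = np.eye(2)
        R = np.diag([0.5, 0.25])                 # radar noise (assumed values)
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ H) @ self.P

    def update_camera(self, label):
        # The camera cannot measure range or speed as precisely, but it can
        # say *what* the object is; fuse that semantic label into the track.
        self.label = label

# Usage: a camera report, then radar reports arriving over 0.1 s cycles.
track = FusedTrack(distance_m=40.0, speed_mps=-2.0)
track.update_camera("pedestrian")
for radar_distance, radar_speed in [(39.8, -2.1), (39.6, -2.0)]:
    track.predict(dt=0.1)
    track.update_radar(radar_distance, radar_speed)
print(f"{track.label} at {track.x[0]:.1f} m, closing at {abs(track.x[1]):.1f} m/s")
```

The design point this sketch makes is that fusion is not just averaging: each sensor contributes the quantity it measures best, and the filter's covariance bookkeeping determines how much weight each new radar measurement receives.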
T-U Automotive Detroit
Experts and practitioners in the fields of telematics, autonomous systems, and mobility will be coming together at the T-U Automotive Detroit conference, June 3–4, 2015, in Novi, Michigan, to discuss sensor fusion and many other related topics. Anyone interested in attending can save $100 on the registration fee at www.tu-auto.com/detroit/register.php by using the promotional code 2693NAVIGANT during checkout.