Sensor fusion – more than just the sum of its parts!

April 11, 2016 // By Hannes Estl
Many cars on the road today, and even more new cars in the showrooms, have some form of advanced driver assistance system (ADAS) based on sensors such as cameras, radar, ultrasound or LIDAR.

These numbers will continue to increase as new laws are passed (for example, mandatory rear-view cameras in the United States). In addition, insurance discounts and car safety ratings from agencies like the National Highway Traffic Safety Administration (NHTSA) and the European New Car Assessment Programme (Euro-NCAP) are making some systems effectively standard or are increasing customer demand for them.

Autonomous car features like valet parking, highway cruise control and automated emergency braking also rely heavily on sensors. It is not just the number or type of sensors that matters, but how you use them. Most ADAS installed in cars on the street today operate independently, meaning they hardly exchange information with each other. (Yes, some high-end cars have very advanced autonomous functions, although this is not yet the norm.) Rear-view cameras, surround-view systems, radar and front cameras all have their individual purpose. By adding these independent systems to a car, you can give more information to the driver and realize some autonomous functions. However, you can also hit a limit on what can realistically be done – see Figure 1.


Figure 1: ADAS added as individual, independent functions to a car.

Sensor fusion

The individual shortcomings of each sensor type cannot be overcome by simply using the same sensor type multiple times. Instead, the information coming from different types of sensors must be combined. A camera CMOS chip working in the visible spectrum has trouble in dense fog, rain, sun glare and the absence of light. Radar lacks the high resolution of today's imaging sensors, and so on. Similar strengths and weaknesses can be listed for every sensor type.

The great idea behind sensor fusion is to take the inputs of different sensors and sensor types and use the combined information to perceive the environment more accurately. That results in better and safer decisions than independent systems could make. Radar may not have the resolution of light-based sensors, but it is excellent at measuring distance and at piercing through rain, snow and fog. Those conditions, or the absence of light, do not suit a camera, yet a camera can see color (think street signs and road markings) and offers high resolution. One-megapixel image sensors are on the street today; over the next few years the trend will move to two and even four megapixels.
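To make the idea concrete, here is a minimal sketch of one fusion step: two distance estimates for the same object, one from radar and one from a camera, are combined by inverse-variance weighting, a simplified single-dimension Kalman-style update. The sensor names, noise figures and numbers are purely illustrative assumptions, not values from any production system.

```python
# Minimal sketch of one fusion step: combine a radar and a camera distance
# estimate by inverse-variance weighting. The more confident sensor (smaller
# variance) dominates the fused result. All numbers are illustrative only.

def fuse_estimates(radar_dist_m, radar_var, camera_dist_m, camera_var):
    """Return a fused distance estimate and its variance."""
    w_radar = 1.0 / radar_var
    w_camera = 1.0 / camera_var
    fused_dist = (w_radar * radar_dist_m + w_camera * camera_dist_m) / (w_radar + w_camera)
    fused_var = 1.0 / (w_radar + w_camera)
    return fused_dist, fused_var


# Example: dense fog degrades the camera, so its variance is large and the
# fused estimate stays close to the radar measurement (~42.2 m here).
print(fuse_estimates(radar_dist_m=42.0, radar_var=0.25,
                     camera_dist_m=45.0, camera_var=4.0))
```

In clear daylight the weights would flip: the camera's variance shrinks and its measurement dominates, which is exactly the complementary behavior described above.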

Radar and camera are examples of two sensor technologies that complement each other very well. In this way a fused system can do more than the sum of its independent parts. Using different sensor types also offers a certain level of redundancy against environmental conditions that could make all sensors of one type fail. Such a failure or malfunction can be caused by natural phenomena (such as a dense fog bank) or man-made ones (for instance, spoofing or jamming a camera or radar). A sensor-fused system can maintain some basic or emergency functionality even if it loses one sensor. For purely warning functions, or when the driver is always ready and able to take over control, a system failure might not be that critical. However, highly and fully autonomous functions must allow adequate time to hand control back to the driver, and the control system needs to maintain a minimum level of control during that time span.
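One way to picture that fallback behavior is a simple degraded-mode policy: when one sensor type drops out, the system selects whatever reduced function the remaining sensors still support and uses it to bridge the hand-over to the driver. The mode names and the time budget below are hypothetical illustrations, not taken from the article or from any real system.

```python
# Hedged sketch of a degraded-mode policy for a sensor-fused system.
# Mode names and the hand-over window are assumed, illustrative values.

HANDOVER_WINDOW_S = 10.0  # assumed time budget to return control to the driver

def select_mode(camera_ok: bool, radar_ok: bool) -> str:
    """Pick the best remaining operating mode given which sensors still work."""
    if camera_ok and radar_ok:
        return "full_function"        # fused perception available
    if radar_ok:
        return "distance_keeping"     # radar only: no lane or sign reading
    if camera_ok:
        return "lane_keep_degraded"   # camera only: reduced speed, no ranging in fog
    return "controlled_stop"          # no environment perception left

print(select_mode(camera_ok=False, radar_ok=True))  # -> "distance_keeping"
```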