Maps for autonomous vehicles

October 21, 2016 // By Ben Peters
Driven by an explosion in consumer digital services, the most profound of which is set to be autonomous vehicles, cartography is evolving fast. Traditionally, maps have been diagrams of physical features such as the elevation contours of a land mass, water, buildings and roads, but today’s maps are becoming much more complex, comprising multiple layers of data, each containing specific geospatial and temporal content on top of the raw physical representation.
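
To make that idea concrete, the sketch below shows one way such a layered map might be represented in Python: a physical base layer plus overlay layers that each carry their own geospatial features and temporal validity. The class and field names are hypothetical illustrations, not any vendor's actual format.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Optional

@dataclass
class MapFeature:
    feature_id: str
    geometry: list                          # e.g. a list of (lon, lat) vertices
    attributes: dict = field(default_factory=dict)

@dataclass
class MapLayer:
    name: str                               # e.g. "roads", "lane_markings", "traffic"
    features: list = field(default_factory=list)
    valid_from: Optional[datetime] = None   # temporal validity of this layer's data
    valid_to: Optional[datetime] = None

@dataclass
class LayeredMap:
    base: MapLayer                          # the raw physical representation
    overlays: list = field(default_factory=list)

    def layers_valid_at(self, t: datetime) -> list:
        """Return the overlay layers whose data is valid at time t."""
        return [layer for layer in self.overlays
                if (layer.valid_from is None or layer.valid_from <= t)
                and (layer.valid_to is None or t <= layer.valid_to)]
```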

Of course, to derive high value from a map, a user must determine their position on it, a task known as localization. There are several ways users might localize themselves, ranging from a simple visual comparison of the current scene with a prior image of that same scene to more sophisticated techniques.
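
As a rough illustration of that simple visual comparison, the following sketch matches ORB features between the current camera view and a prior image of the same scene using OpenCV; a high number of close matches suggests the user is near the spot where the prior image was captured. The function name, file paths and distance threshold are assumptions made for the example.

```python
import cv2

def match_score(current_path: str, prior_path: str, max_distance: int = 40) -> int:
    """Count close ORB feature matches between the current view and a prior image."""
    current = cv2.imread(current_path, cv2.IMREAD_GRAYSCALE)
    prior = cv2.imread(prior_path, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create()
    _, desc_current = orb.detectAndCompute(current, None)
    _, desc_prior = orb.detectAndCompute(prior, None)
    if desc_current is None or desc_prior is None:
        return 0  # no usable features in one of the images

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(desc_current, desc_prior)

    # Count only matches with small descriptor distance; the threshold is illustrative.
    return sum(1 for m in matches if m.distance < max_distance)
```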

Google’s introduction of visual imagery to enhance its maps in its Street View product in 2007 allowed such a simple comparison for consumer map localization. Crowd-sourced alternatives, such as those offered by startups like Mapillary, share the same aim. More recent work by Here (previously known as Navteq before being acquired by Nokia), TomTom and Google (again) has focused on how to automate accurate localization for machines, specifically autonomous vehicles.

Alongside several university research groups and startups, those companies have begun the task of surveying, processing and curating wide-scale, high-definition 3D maps, in which visual imagery and 3D point-cloud data from LIDAR sensors are combined to create a highly accurate geometric fingerprint of the world. Such prior HD maps offer the prospect of machines being able to localize themselves within those maps down to centimeter-level accuracy. The accuracy and reliability of such an approach are certainly better than those of GNSS, but it has its own problems that we’ll come to later.
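
One common way to localize against such a prior point-cloud map is to register a live LIDAR scan to the map with an iterative closest point (ICP) alignment; the resulting transform is the vehicle's pose in the map frame. The sketch below uses the open-source Open3D library purely as an illustration, with placeholder file names and an initial pose guess (e.g. from GNSS or odometry) supplied by the caller; the correspondence distance is an assumed value, and production systems differ in the details.

```python
import numpy as np
import open3d as o3d

def localize_scan(scan_path: str, map_path: str, initial_pose: np.ndarray) -> np.ndarray:
    """Estimate the vehicle pose by aligning a LIDAR scan to a prior HD map tile."""
    scan = o3d.io.read_point_cloud(scan_path)       # live LIDAR scan
    hd_map = o3d.io.read_point_cloud(map_path)      # prior HD map point cloud

    result = o3d.pipelines.registration.registration_icp(
        scan,                                       # source cloud
        hd_map,                                     # target cloud
        0.5,                                        # max correspondence distance in metres (illustrative)
        initial_pose,                               # 4x4 initial guess, e.g. from GNSS/odometry
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    return result.transformation                    # 4x4 pose of the scan in the map frame
```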

Having localized on a prior HD map, a user can determine the optimum path from that location to a planned destination, a task simply referred to as routing. Of course, electronic 2D maps have been available as prior maps for GNSS satellite navigation systems for decades and are regularly used for routing from A to B. Should a GNSS unit fail or exhibit inaccuracy, various SLAM (simultaneous localization and mapping) techniques can be adopted to derive a coarse approximation of location on a route that is sufficient for navigation, thus providing continuity of service. Historically, however, the computer vision technology used in SLAM has grappled with problems such as determining which pixels in a video feed represent sidewalk and which represent road.
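
At its core, routing is a shortest-path search over a road graph. The minimal sketch below illustrates this with the networkx library; the node names and edge lengths are made up for the example, and a real navigation system would weight edges by travel time, traffic and other costs rather than distance alone.

```python
import networkx as nx

# Toy road graph: nodes are junctions, edges carry an illustrative length in metres.
road_graph = nx.DiGraph()
road_graph.add_edge("A", "B", length_m=350)
road_graph.add_edge("B", "C", length_m=420)
road_graph.add_edge("A", "D", length_m=900)
road_graph.add_edge("D", "C", length_m=200)

# Shortest path from the localized position "A" to the destination "C".
route = nx.shortest_path(road_graph, source="A", target="C", weight="length_m")
print(route)  # ['A', 'B', 'C']
```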