Making maps evergreen with deep learning, robots and computer vision - Naver Labs Europe
A system that correctly detects when places have changed to automatically update complex indoor maps. Dataset available for research.

Shops and restaurants are forever closing down in one place and opening in another, which means the maps that locate them need to be updated just as often. In some countries, like South Korea, as much as a third of places change every year.
Taking care of these map changes is typically slow and expensive, so finding a way for them to automatically ‘self-update’ makes sense to both map readers and map providers. At NAVER LABS we’ve been investigating exactly how to do that. Using robotics, computer vision and deep learning technology, our self-driving robots analyse the indoor space data, recognize the Points-of-Interest (POIs) that have changed and update the map.

Artificial intelligence for automatic map updates

The map update technology was initially tested in two large shopping malls, but the approach is just as applicable outdoors. The malls where the experiments were conducted are places where new shops appear or change most frequently. The self-updating map technology identifies only the shops that have changed within a wide and complex indoor space.

[Figure: self-updating map technology]
We first collect geo-localized images inside the shopping mall using an autonomous robot. This process is repeated on a regular basis to get a snapshot of each location over time. We then compare pairs of images that show the same location at different times, applying deep learning to determine if a change has occurred. We also need to determine whether the change detected is real and not a false positive due to misleading input such as advertisements in the shop windows (of which there are many!).
[Figure: geo-localized images used for training]
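The pairing step above can be sketched in a few lines: given two capture sessions of geo-localized images, each new image is matched to the closest earlier image taken within a small radius. The record layout, names and the one-metre matching radius below are illustrative assumptions, not details from the paper.

```python
import math

# Hypothetical records: each capture session yields (image_id, x, y) poses.
session_t0 = [("img_a", 0.0, 0.0), ("img_b", 5.0, 0.0), ("img_c", 10.0, 2.0)]
session_t1 = [("img_d", 0.2, 0.1), ("img_e", 4.9, 0.3), ("img_f", 20.0, 5.0)]

def pair_by_pose(old, new, max_dist=1.0):
    """Match each new image to the closest old image within max_dist metres."""
    pairs = []
    for new_id, nx, ny in new:
        best, best_d = None, max_dist
        for old_id, ox, oy in old:
            d = math.hypot(nx - ox, ny - oy)
            if d <= best_d:
                best, best_d = old_id, d
        if best is not None:
            pairs.append((best, new_id))
    return pairs

pairs = pair_by_pose(session_t0, session_t1)
# img_f has no nearby counterpart in the earlier session, so it forms no pair.
```

In practice the robot's poses are full 6-DoF, but the same nearest-neighbour idea applies.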

POI Change Detection in large shopping centers

The algorithm we developed correctly recognized, in each shopping mall, which places were new, which had disappeared or changed, and which had remained the same even when the shop window had been modified. By combining computer vision, deep learning technology and autonomous driving robots, we believe we’ve found a way to efficiently manage large amounts of POI information and provide the highest level of map accuracy.


Self-driving robots are not yet widespread, but it will not be long before many people spend time in spaces where robots coexist with them. These robots will offer a variety of services such as delivery, security and guidance, and they will all use self-updating map technology to keep their indoor map information up to date.

A paper at CVPR’19

The technology developed is being presented at the Computer Vision and Pattern Recognition (CVPR) conference in California, USA in June 2019. The paper, entitled ‘Did it change? Learning to Detect Point-of-Interest Changes for Proactive Map Updates’, is available here.

In a nutshell, our approach builds on the deep metric learning framework. We specifically want to train a deep network to predict whether two input images are similar or not. The notion of similarity used here is highly flexible: it depends only on the exemplar similar and dissimilar image pairs that the network sees during training. We tag image pairs that show a change, or different locations, as ‘dissimilar’, and pairs showing no change as ‘similar’. In the end, the network learns by itself to distinguish what characterizes a true POI change. In the paper, we’ve evaluated different mathematical formulations of the deep metric learning problem and showed that, even though all of them significantly beat standard baselines, some work better than others. For example, the trained network can correctly classify the first three examples below as ‘POI changes’ and the last one as ‘no change’:

[Figure: example image pairs classified by the network]
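To make the metric-learning idea concrete, here is a minimal sketch of the two sides of such a system: at inference, images are embedded and a pair is flagged as a POI change when the embedding distance exceeds a threshold; at training, a contrastive-style objective pulls similar pairs together and pushes dissimilar ones apart. The toy embeddings, the 0.5 threshold and the margin are made-up illustrations, not the actual network outputs or hyper-parameters from the paper.

```python
import math

def is_poi_change(emb_before, emb_after, threshold=0.5):
    """Flag a pair as a POI change when its embeddings are far apart."""
    return math.dist(emb_before, emb_after) > threshold

def contrastive_loss(distance, similar, margin=1.0):
    """A common metric-learning objective: pull 'similar' pairs together,
    push 'dissimilar' pairs at least `margin` apart."""
    if similar:
        return distance ** 2
    return max(margin - distance, 0.0) ** 2

# Toy 2-D embeddings standing in for the network's outputs.
same_shop  = ([0.10, 0.90], [0.12, 0.88])   # small distance -> no change
new_tenant = ([0.10, 0.90], [0.90, 0.10])   # large distance -> POI change

print(is_poi_change(*same_shop), is_poi_change(*new_tenant))
```

Running this prints `False True`: the unchanged shop front stays below the threshold while the new tenant's embedding moves far from the old one.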
Releasing the Mallscape dataset

Given the widespread availability of street-view data, we were somewhat surprised that, to the best of our knowledge, there hasn’t yet been any focus on this exciting topic. One potential reason for this absence is the lack of an appropriate benchmark. Despite the fact that many datasets related to localization and landmarks have recently emerged, none of them provides information related to both POIs and time-stamps.

To foster research in the area we’re releasing the Mallscape dataset used in our experiments. It is composed of more than 33K images, captured in two large malls. Each image comes with a precise 6 degrees-of-freedom (DoF) localization pose obtained by the robot using LIDAR. To mimic real capture conditions, we’ve acquired the data in a rather unconstrained way. Detailed statistics about the two splits of the dataset are presented below:

[Figure: statistics for the two dataset splits]
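As a sketch of how records of this kind might be consumed, the snippet below parses a hypothetical text line carrying an image path, a timestamp and a 6-DoF pose. The field layout is an illustrative assumption; the actual file format of the released dataset may differ.

```python
from dataclasses import dataclass

@dataclass
class MallscapeRecord:
    # Hypothetical per-image record: path, capture time, and 6-DoF pose.
    image_path: str
    timestamp: float                  # capture time in seconds
    pose: tuple                       # (x, y, z, roll, pitch, yaw)

def parse_record(line):
    """Parse one whitespace-separated line: path timestamp x y z roll pitch yaw."""
    parts = line.split()
    return MallscapeRecord(
        image_path=parts[0],
        timestamp=float(parts[1]),
        pose=tuple(float(v) for v in parts[2:8]),
    )

rec = parse_record("mall_a/cam0/0001.jpg 1561000000.0 1.0 2.0 0.0 0.0 0.0 1.57")
```

With timestamps and poses in hand, images from different capture sessions can be grouped by location and compared over time, as described above.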
Overall, the Mallscape dataset fully addresses all possible POI change scenarios. The two splits can be downloaded at Mallscape-A and Mallscape-B. The evaluation code to reproduce the paper results can be downloaded here.