Using the KAPTURE pipeline and robust R2D2 method, we ranked 1st, 2nd and 4th in the VisLocOdomMap workshop challenge.
Martin Humenberger | 2020
For an autonomous robot or vehicle, knowing its exact location is fundamental to many tasks, especially any that require it to go somewhere or to know where it’s been. One way for an autonomous robot to locate itself is through vision. This task is called visual localization, and it’s an active area of our AI for robotics research at NAVER LABS Europe.
This year, VisLocOdomMapCVPR2020 (Joint Workshop on Long-Term Visual Localization, Visual Odometry and Geometric and Learning-based SLAM at the 2020 Conference on Computer Vision and Pattern Recognition) issued three visual localization challenges to advance research on the topic.
We’re proud to announce that our team’s entry performed extremely well, ranking first in challenge 1, fourth in challenge 2, and second in challenge 3. These results demonstrate a method that generalizes well and is relevant to a broad array of applications.
This robust and versatile performance comes from combining our image retrieval method APGeM, our local feature method R2D2 (which won the 2019 local feature challenge), and our own visual localization pipeline KAPTURE, which introduces a new data format and convenient tooling for processing visual localization and structure-from-motion data.
Visual Localization
GPS, which relies on the signals of overhead satellites, is an excellent and easy way to obtain exact coordinates, but only in some scenarios. In urban areas, tall buildings form shadows and canyons that can reflect or block these signals. Worse, GPS signals can’t penetrate the tops and sides of buildings, so these signals can’t be reliably used indoors.
In contrast, visual localization can work well in these scenarios.
To locate the position and orientation of an autonomous robot using its cameras, the information in the camera images is processed and then matched against some representation of the environment. There are several ways to do this: structure-based methods, which use large-scale 3D reconstructions and local correspondence analysis; scene point regression, which estimates the 3D position of each pixel of the image; and absolute pose regression, which directly estimates the camera pose from an input image.
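To make the structure-based approach concrete, here is a minimal sketch of its final step: estimating the camera pose from 2D-3D correspondences with a PnP solver inside a RANSAC loop. It uses OpenCV's solvePnPRansac; the correspondences, intrinsics and thresholds are placeholders, not values from our pipeline.

```python
import numpy as np
import cv2

# Minimal sketch of the last step of structure-based localization: given 2D
# keypoints in the query image that were matched to 3D points of the map,
# estimate the camera pose with a PnP solver inside a RANSAC loop.
# The correspondences and intrinsics below are random placeholders.
points_3d = np.random.rand(100, 3).astype(np.float64)          # map points (world frame)
points_2d = np.random.rand(100, 2).astype(np.float64) * 640.0  # matched query keypoints (pixels)
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])                                 # pinhole camera intrinsics

success, rvec, tvec, inliers = cv2.solvePnPRansac(
    points_3d, points_2d, K, None,
    iterationsCount=1000, reprojectionError=8.0)

if success:
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix (world -> camera)
    camera_center = -R.T @ tvec      # camera position in world coordinates
    print("Estimated camera center:", camera_center.ravel())
```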
Although vision-based localization can work well in many scenarios, several challenges remain. For instance, most methods rely on a representation of the environment built using images, but the time of day or season of the year can change the appearance of a scene. Furthermore, buildings can change, and the images in the map, even one that is carefully collected, might not cover all the viewpoints that a camera could encounter.
Another set of challenges arises because some types of images are difficult to process reliably; for instance, images that include changes in illumination (or not enough illumination), moving objects, noise such as motion blur, and scenes with little texture.
Finally, reliable real-time performance will be necessary in many applications that use vision-based localization such as autonomous vehicles, robots and augmented reality (AR) software.
Like many visual localization approaches, ours addresses these challenges using structure from motion, which computes local features in 2D images and then matches them to create a 3D reconstruction of the environment. In this approach, we employ our robust image retrieval method APGeM along with our robust local features R2D2. This method is implemented in our own visual localization pipeline KAPTURE.
Robust local features and robust image retrieval (R2D2 + APGeM)
To extract the local features, we use our sparse keypoint detector and descriptor R2D2, which uses a model trained on synthetic image pairs with known transformations. R2D2 performs detection and description jointly, but estimates keypoint reliability and repeatability separately.
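As a toy illustration of how these two maps can be combined (a sketch, not the released R2D2 code), one can keep locations where the product of repeatability and reliability is a local maximum and attach the corresponding descriptors:

```python
import torch
import torch.nn.functional as F

def select_keypoints(repeatability, reliability, descriptors, top_k=2000, nms_radius=3):
    """Toy R2D2-style keypoint selection: keep locations where the joint score
    (repeatability * reliability) is a local maximum, then rank them by score."""
    score = repeatability * reliability                        # (1, 1, H, W) joint score
    local_max = F.max_pool2d(score, kernel_size=2 * nms_radius + 1,
                             stride=1, padding=nms_radius)
    score = torch.where(score == local_max, score, torch.zeros_like(score))

    flat = score.flatten()
    top_k = min(top_k, int((flat > 0).sum()))
    values, indices = flat.topk(top_k)
    h, w = score.shape[-2:]
    ys, xs = indices // w, indices % w

    descs = descriptors[0, :, ys, xs].t()                      # one descriptor per keypoint
    return torch.stack([xs, ys], dim=1), values, descs

# Placeholder network outputs; a real run would use the trained R2D2 model.
H, W, D = 240, 320, 128
repeatability = torch.rand(1, 1, H, W)
reliability = torch.rand(1, 1, H, W)
descriptors = F.normalize(torch.rand(1, D, H, W), dim=1)
keypoints, scores, descs = select_keypoints(repeatability, reliability, descriptors)
```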
Structure from motion typically considers all possible image pairs, which is computationally expensive and therefore impractical for 3D reconstructions of large-scale environments such as big buildings or even entire cities. In such cases, image retrieval methods can be used to quickly retrieve only the most relevant images from a structure-from-motion model. This provides initial guesses about the location, which are then refined using a full feature correspondence search.
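The shortlisting step itself is simple: with L2-normalized global descriptors, candidate mapping images can be ranked by a dot product with the query descriptor. The sketch below uses randomly generated placeholders in place of real retrieval descriptors.

```python
import numpy as np

def retrieve_top_k(query_descriptor, map_descriptors, k=20):
    """Illustrative image-retrieval shortlist: with L2-normalized global
    descriptors, cosine similarity reduces to a dot product, and the k most
    similar mapping images are kept as candidates for local-feature matching."""
    similarities = map_descriptors @ query_descriptor      # (N,) cosine similarities
    return np.argsort(-similarities)[:k]

# Placeholder descriptors; real ones would be produced by the retrieval network.
rng = np.random.default_rng(0)
map_descriptors = rng.normal(size=(10000, 2048)).astype(np.float32)
map_descriptors /= np.linalg.norm(map_descriptors, axis=1, keepdims=True)
query_descriptor = map_descriptors[42] + 0.1 * rng.normal(size=2048).astype(np.float32)
query_descriptor /= np.linalg.norm(query_descriptor)

candidate_ids = retrieve_top_k(query_descriptor, map_descriptors, k=20)
```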
To retrieve similar images, we use our off-the-shelf deep visual representation APGeM, which uses a generalized mean pooling (GeM) layer to aggregate feature maps into a compact, fixed-length representation. The model was trained to retrieve landmarks on the Google Landmarks dataset by directly optimizing mean average precision (mAP). Although retrieval representations designed for geolocalization have been used in other visual localization methods, we found APGeM to be more suitable for our purposes.
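For reference, generalized mean pooling raises each activation to a power p, averages spatially, and takes the p-th root, so it interpolates between average pooling (p = 1) and max pooling (p → ∞). Below is a minimal PyTorch sketch of such a layer; it is illustrative and not the official APGeM implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GeM(nn.Module):
    """Generalized mean (GeM) pooling: each channel of the CNN feature map is
    pooled as (mean(x^p))^(1/p), with a learnable exponent p.
    Minimal sketch, not the official APGeM implementation."""
    def __init__(self, p=3.0, eps=1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x):                         # x: (B, C, H, W) backbone feature map
        x = x.clamp(min=self.eps).pow(self.p)
        x = F.adaptive_avg_pool2d(x, output_size=1)
        return x.pow(1.0 / self.p).flatten(1)     # (B, C) global descriptor

features = torch.rand(2, 2048, 7, 7)              # placeholder backbone output
descriptor = F.normalize(GeM()(features), dim=1)  # L2-normalized retrieval descriptor
```

The descriptor is L2-normalized so that similarity between images can be computed as a simple dot product, as in the retrieval sketch above.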
Finally, the COLMAP structure-from-motion library is used to perform the geometric verification of the matches, image registration, and point triangulation needed for 3D reconstruction.
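In practice this step can be driven through COLMAP's command-line interface. The sketch below shows what a triangulation call might look like when camera poses are already known; it assumes COLMAP is installed and that a database with imported features and verified matches exists, and all paths are placeholders.

```python
import subprocess

# Illustrative call to the COLMAP command line (paths are placeholders).
# point_triangulator triangulates 3D points from verified matches while
# keeping the already-known camera poses fixed.
subprocess.run([
    "colmap", "point_triangulator",
    "--database_path", "colmap/database.db",   # database with keypoints and matches
    "--image_path", "sensors/records_data",    # directory of mapping images
    "--input_path", "colmap/sparse_input",     # model with known camera poses
    "--output_path", "colmap/sparse",          # triangulated 3D model
], check=True)
```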
The workflow consists of two major pipelines: one for mapping and one for localization.
Although they accomplish different tasks, the pipelines employ APGeM, R2D2 and COLMAP in similar ways.
Introducing the KAPTURE pipeline
When running a visual localization pipeline on several datasets, one challenge is to convert those datasets into a format that the algorithm and all the tools can handle. To facilitate our experiments, we created KAPTURE, a flexible data format and processing pipeline for structure from motion and visual localization, which we plan to release publicly in the near future.
Many structure-from-motion formats exist, notably the ones from Bundler, VisualSFM, OpenMVG, and COLMAP, but none met all our requirements. In particular, we needed a format that could handle not only timestamps, shared camera parameters, and multi-camera rigs, but also reconstruction data (such as descriptors, global features, 3D points and matches). Moreover, we wanted a format that would be flexible and easy to use in localization experiments. Finally, the data needed to be easy to convert into other formats supported by major open-source projects such as OpenMVG and COLMAP.
Inspired by OpenMVG and COLMAP, KAPTURE started as a pure data format that provides a good representation of all the information we needed. It then grew into a Python toolbox and library for data manipulation (such as data format conversion, dataset merging/splitting, and trajectory visualization), and finally it became the basis for our mapping and localization pipeline.
Currently, the KAPTURE format can be used to store sensor data such as images, camera parameters, camera rigs, and trajectories, as well as other sensor data such as LiDAR scans and WiFi records. It can also be used to store reconstruction data, in particular local descriptors, keypoints, global features, 3D points, observations, and matches.
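As a quick taste of the toolbox, here is a minimal sketch of loading a kapture-formatted dataset with the kapture Python package; the dataset path is a placeholder and the exact API may evolve between versions.

```python
# Minimal sketch of reading a dataset stored in the kapture format.
# The dataset path is a placeholder; API names follow the project
# documentation and may change in future releases.
import kapture
import kapture.io.csv as csv

kapture_data = csv.kapture_from_dir('./my_dataset')   # path to a kapture-formatted dataset
print(len(kapture_data.sensors), 'sensors')           # cameras (and other sensors) with their parameters

# Image records associate a timestamp and a camera with an image file.
for timestamp, sensor_id, image_name in kapture.flatten(kapture_data.records_camera, is_sorted=True):
    print(timestamp, sensor_id, image_name)
    break
```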
Results and Future Work
Our method obtains state-of-the-art performance on the public datasets used in the VisLocOdomMapCVPR2020 challenges (the Aachen Day-Night, Inloc, RobotCar Seasons, Extended CMU-Seasons, and SILDa Weather and Time of Day datasets). These datasets cover challenging conditions such as changes in the time of day and season of the year, outdated reference representations, occlusion, motion blur, extreme viewpoint changes, and low-texture areas. These results demonstrate the excellent ability of our method to generalize and handle many possible changes in the appearance of scenes.
The top rankings in the VisLocOdomMapCVPR2020 challenges further demonstrate the wide variety of potential applications for our method. Specifically, we ranked first in challenge 1, which focuses on autonomous vehicles and provides sequences of query images with motion limited to a plane. However, our method also performed competitively in challenge 2, which addresses a very different scenario that provides only single query images and unrestricted 6-degree-of-freedom motion.
Although we could have achieved the same results without KAPTURE, it would have taken much longer because the experiments would have had to be run manually. We believe that the KAPTURE format and tools could be useful to the community, so we plan to release them as an open-source project very soon. We’d also like to provide the major public datasets in the visual localization domain in this format to facilitate future experiments for everyone.
Finally, we believe that global as well as local image features can be further improved to increase the robustness of visual localization, so we plan to continue research in this direction. However, we also believe that large 3D point clouds are quite difficult to maintain in practice. So, we’re also investigating methods that provide image poses without the need for 3D reconstruction.
Research team who contributed to the technology and challenge (in alphabetical order):
Yohann Cabon, Gabriela Csurka Khedari, Nicolas Guérin, Vincent Leroy, Julien Morat, Noé Pion, Philippe Rerole, Jérôme Revaud, Cesar De Souza, Philippe Weinzaepfel