Philippe Weinzaepfel, Gabriela Csurka Khedari, Yohann Cabon, Martin Humenberger
2019
This article presents our CVPR’19 paper on “Visual Localization by Learning Objects-of-Interest Dense Match Regression”.
Task and challenges
Visual localization consists of estimating the 6-DoF camera pose (position and orientation) of a single RGB image within a given area, also referred to as the ‘map’. It is particularly useful in indoor locations where GPS is unavailable, with applications in robot navigation, self-driving cars and augmented reality.
The main difficulties in estimating the camera pose are large changes in viewpoint between the query and training images, incomplete maps, regions with no valuable information (textureless surfaces), symmetric and repetitive elements, varying lighting conditions, structural changes, dynamic objects (e.g. people) and scalability to large areas. Handling dynamic scenes, where objects may change between mapping and query time, is key to all long-term visual localization applications and is a typical failure case of state-of-the-art methods.
Objects-of-Interest
Our main idea here is to leverage what we call Objects-of-Interest (OOIs). We define an OOI as a discriminative and stable area within the 3D map that can be reliably detected from multiple viewpoints, even when partly occluded and under varying lighting conditions. Typical examples are paintings in a museum, or storefronts and brand logos in a shopping mall.
OOIs-based Visual Localization
Assuming there is a database of OOIs, our visual localization approach relies on a CNN to detect the objects-of-interest, segment them and provide a dense set of 2D-2D matches between the detected OOIs and their reference images. Reference images are standard views of the OOIs for which the mapping to the 3D coordinates of the object is given by the database. The CNN architecture we use is inspired by DensePose [1]. By transitivity, combining the 2D-2D matches with the 2D-3D correspondences of the reference images yields a set of 2D-3D matches, from which the camera pose is obtained by solving a Perspective-n-Point (PnP) problem with RANSAC.
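To make the last step concrete, here is a minimal Python sketch (not code from the paper; the function name, the `ref_xyz_map` lookup table and the parameter values are illustrative assumptions) of how 2D-2D matches against a reference image with known per-pixel 3D coordinates can be lifted to 2D-3D correspondences and passed to a PnP solver with RANSAC, e.g. OpenCV's:

```python
import numpy as np
import cv2

def localize_from_ooi_matches(matches_2d2d, ref_xyz_map, K, dist_coeffs=None):
    """Estimate the 6-DoF camera pose of a query image from OOI matches.

    matches_2d2d : (N, 4) array of [x_query, y_query, x_ref, y_ref] matches
                   between the query image and an OOI reference image.
    ref_xyz_map  : (H, W, 3) array of the 3D scene coordinates of every
                   pixel of the reference image (known from the OOI database).
    K            : (3, 3) intrinsic matrix of the query camera.
    """
    query_pts = matches_2d2d[:, :2].astype(np.float64)
    ref_pts = matches_2d2d[:, 2:].astype(int)

    # Lift each 2D-2D match to a 2D-3D correspondence by looking up the
    # 3D coordinates of the matched reference pixel.
    points_3d = ref_xyz_map[ref_pts[:, 1], ref_pts[:, 0]].astype(np.float64)

    # Solve the Perspective-n-Point problem robustly with RANSAC.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d, query_pts, K, dist_coeffs,
        reprojectionError=3.0, iterationsCount=1000)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)      # world-to-camera rotation
    cam_position = -R.T @ tvec      # camera centre in world coordinates
    return R, tvec, cam_position
```

RANSAC makes the pose estimate robust to the outlier correspondences that inevitably remain after dense match regression.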
Our method is designed to tackle these open challenges of visual localization. It has several advantages and a few limitations with respect to the state of the art.
One clear limitation of our method is that query images without any OOI cannot be localized. However, in many applications such as AR navigation, OOIs are visible most of the time and local pose tracking (e.g. visual-inertial odometry) can be used in between. OOI detection is also useful in its own right in this kind of AR application, e.g. to display metadata on paintings in a museum or on shops in malls and airports. Furthermore, in a complex real-world application, OOIs make it easier to guide people: a command such as ‘Take a picture of the closest painting’ is easier to understand than ‘Take a picture with sufficient visual information’.
The Virtual Gallery dataset
We’ve introduced a new synthetic dataset to study the applicability of our approach and to measure the impact of varying lighting conditions and occlusions on different localization methods. It consists of a scene containing 3-4 rooms in which 42 freely usable famous paintings are placed on the walls. The scene was created with the Unity game engine, which lets us render images together with ground-truth information such as depth, semantic and instance segmentations, and 2D-2D and 2D-3D correspondences.
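As an illustration of how such ground truth can be obtained in a synthetic setup, the sketch below (a hypothetical helper, not the actual export code of our Unity pipeline) back-projects a rendered depth map to per-pixel 3D world coordinates, which directly gives dense 2D-3D correspondences for every pixel:

```python
import numpy as np

def depth_to_world_points(depth, K, R, t):
    """Back-project a rendered depth map to 3D world coordinates.

    depth : (H, W) array of metric depth along the camera z-axis.
    K     : (3, 3) intrinsics used to render the image.
    R, t  : world-to-camera rotation (3, 3) and translation (3,).
    Returns an (H, W, 3) map of world coordinates, i.e. a dense set of
    ground-truth 2D-3D correspondences.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x HW
    rays = np.linalg.inv(K) @ pix               # rays with unit z in camera frame
    pts_cam = rays * depth.reshape(1, -1)       # scale each ray by its depth
    pts_world = R.T @ (pts_cam - t.reshape(3, 1))  # camera frame -> world frame
    return pts_world.T.reshape(h, w, 3)
```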
To study robustness to lighting conditions, we render the scene under 6 different lighting configurations with significant variation between them. To evaluate robustness to occluders such as visitors, we generate test images that contain randomly placed human body models.
The dataset can be downloaded from our site here: Virtual Gallery Dataset
Result overview
Here’s a short recap of the main findings of our experiments. You can read all the details in the paper.
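For context, localization accuracy in this kind of evaluation is commonly reported as median position and orientation errors over the test images; the sketch below shows one standard way to compute these errors for a single pose estimate (illustrative code, not taken from the paper):

```python
import numpy as np

def pose_errors(R_est, t_est, R_gt, t_gt):
    """Position error (metres) and orientation error (degrees) of one estimate.

    R_*, t_* are world-to-camera rotations (3, 3) and translations (3,).
    """
    # Camera centres in world coordinates.
    c_est = -R_est.T @ t_est
    c_gt = -R_gt.T @ t_gt
    pos_err = np.linalg.norm(c_est - c_gt)

    # Angle of the relative rotation between the two orientations.
    cos_angle = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    rot_err = np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
    return pos_err, rot_err
```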
References
[1] DensePose: Dense human pose estimation in the wild. Güler et al. CVPR’18.
[2] Learning less is more – 6D camera localization via 3D surface regression. Brachmann and Rother. CVPR’18.
[3] PoseNet: A convolutional network for real-time 6-DoF camera relocalization. Kendall et al. ICCV’15.
[4] A dataset for benchmarking image-based localization. Sun et al. CVPR’17.