Built upon the breakthrough 3D reconstruction framework DUSt3R, the new model ‘MASt3R’ provides metric 3D reconstruction and dense local feature maps capable of handling thousands of images.
Shuzhe Wang, Vincent Leroy, Yohann Cabon, Boris Chidlovskii, Jérome Revaud | 2024
In the ever-evolving world of computer vision, the quest for more accurate and efficient 3D techniques continues to drive innovation. One of the latest breakthroughs in this field is MASt3R, which stands for "Matching And Stereo 3D Reconstruction". MASt3R(1) brings a new level of precision and detail to 3D reconstruction and localization tasks by providing pixel correspondences for even very large image collections. The remarkable results of MASt3R are achieved by adding an extra head to the DUSt3R framework(2), together with a matching algorithm, so that it efficiently outputs a metric 3D reconstruction along with dense local feature maps, providing accurate depth perception and spatial understanding. This metric scaling is critical for many real-world applications where precise measurements are necessary, such as construction, autonomous robot navigation and manipulation. MASt3R has the potential to transform a wide range of real-world applications, from autonomous navigation and robotics to digital-twin smart cities and immersive VR/AR technologies.
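To make the two-head design concrete, here is a minimal sketch in PyTorch, not the authors' code: a shared backbone whose features feed both a per-pixel 3D pointmap head (as in DUSt3R) and the extra descriptor head that MASt3R adds for matching. The module names, layer sizes and the single-image forward pass are simplifying assumptions for illustration; the real model jointly decodes an image pair with a transformer.

```python
# Illustrative sketch of a two-head design: one head regresses per-pixel
# 3D points, the other emits dense local descriptors for matching.
# All names and sizes here are assumptions, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoHeadReconstructor(nn.Module):
    def __init__(self, feat_dim: int = 64, desc_dim: int = 24):
        super().__init__()
        # Stand-in for the real transformer encoder/decoder.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, padding=1), nn.ReLU(),
        )
        # Head 1: per-pixel 3D points (X, Y, Z) in a shared metric frame.
        self.pointmap_head = nn.Conv2d(feat_dim, 3, 1)
        # Head 2 (the MASt3R-style addition): dense per-pixel descriptors.
        self.descriptor_head = nn.Conv2d(feat_dim, desc_dim, 1)

    def forward(self, image: torch.Tensor):
        feats = self.backbone(image)
        pointmap = self.pointmap_head(feats)                     # (B, 3, H, W)
        desc = F.normalize(self.descriptor_head(feats), dim=1)   # unit-norm descriptors
        return pointmap, desc

model = TwoHeadReconstructor()
pts, desc = model(torch.rand(1, 3, 64, 64))
print(pts.shape, desc.shape)  # [1, 3, 64, 64] and [1, 24, 64, 64]
```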
The performance of MASt3R is nothing short of impressive, particularly in map-free relocalization: the process of determining a camera's position and orientation in a space from a single image pair, without relying on a previously constructed map. This is a particularly challenging task in computer vision and robotics (3), as traditional visual localization methods have always required a detailed map. On the public map-free benchmark, MASt3R significantly outperforms other methods, with an overall improvement in accuracy of 30%! It cuts the median translation error (the discrepancy between the camera's actual position and the position predicted by the system) to 0.36 m, less than a third of the 1.17 m of the next best method (Video 1). Minimizing the translation error is essential for the effective operation of systems that rely on precise location information in environments where pre-built maps are unavailable or impractical, allowing robots not only to navigate better but to do so in completely unknown locations.
MASt3R also cuts the median rotation error (the camera's actual orientation vs. the predicted orientation) by 80%, bringing it down to 2.2 degrees compared with 11 degrees for the next best method. Accurate orientation is critical for applications such as aligning visual data correctly in augmented reality (AR) or ensuring the correct movement of autonomous robots.
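The two error metrics above have standard definitions, illustrated in the toy NumPy sketch below on synthetic poses (not benchmark data): translation error is the Euclidean distance between predicted and ground-truth camera positions, and rotation error is the angle of the relative rotation between the two orientations, recovered from its trace.

```python
# Toy sketch of the two pose-error metrics, computed on synthetic poses.
import numpy as np

def rotation_error_deg(R_pred, R_gt):
    """Angle in degrees of the relative rotation between two poses."""
    R_rel = R_pred.T @ R_gt
    # For a rotation matrix, trace(R) = 1 + 2*cos(theta); clip for safety.
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))

def axis_angle(axis, angle):
    """Rodrigues' formula: rotation matrix from an axis and an angle."""
    a = axis / np.linalg.norm(axis)
    K = np.array([[0, -a[2], a[1]], [a[2], 0, -a[0]], [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

rng = np.random.default_rng(0)
trans_errs, rot_errs = [], []
for _ in range(100):
    t_gt = rng.uniform(-5, 5, size=3)            # ground-truth position (m)
    t_pred = t_gt + rng.normal(0, 0.3, size=3)   # noisy predicted position
    R_gt, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(R_gt) < 0:                  # force a proper rotation
        R_gt[:, 0] *= -1
    # Predicted orientation: ground truth perturbed by up to 5 degrees.
    R_pred = R_gt @ axis_angle(rng.normal(size=3), np.radians(rng.uniform(0, 5)))
    trans_errs.append(np.linalg.norm(t_pred - t_gt))
    rot_errs.append(rotation_error_deg(R_pred, R_gt))

print(f"median translation error: {np.median(trans_errs):.2f} m")
print(f"median rotation error:    {np.median(rot_errs):.2f} deg")
```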
In AR and virtual reality (VR), high rotation accuracy is essential for maintaining the realism and immersion of the experience. In robotics, accurate orientation is crucial for tasks like navigation and manipulation, where the robot must understand its pose to interact correctly with the environment.
When it comes to 3D reconstruction from multiple images, i.e. multi-view stereo (MVS), MASt3R also shines. Its capability to handle hundreds or even thousands of images efficiently allows for detailed and accurate reconstructions of complex environments, such as cityscapes or intricate indoor spaces. Despite having no camera pose information and never having been trained on the dataset, MASt3R instantly matches images with an average error as small as 0.374 mm, making it applicable to virtually any setting where extremely precise reconstruction is needed. This level of detail and accuracy opens up new possibilities for creating highly realistic 3D models of real-world environments in real time.
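How do dense feature maps become pixel correspondences? The acceptance rule at the heart of this kind of matching is reciprocal (mutual nearest-neighbour) search: a pair of pixels is kept only when each one's descriptor is the other's nearest neighbour. The brute-force NumPy sketch below illustrates only that rule on toy descriptors; the paper's actual contribution includes a much faster iterative scheme suited to full-resolution images.

```python
# Minimal sketch of reciprocal (mutual nearest-neighbour) matching on
# toy unit-normalised descriptors; not the paper's fast algorithm.
import numpy as np

def reciprocal_matches(desc_a: np.ndarray, desc_b: np.ndarray) -> np.ndarray:
    """desc_a: (N, D), desc_b: (M, D), rows unit-normalised.
    Returns (i, j) index pairs that are mutual nearest neighbours."""
    sim = desc_a @ desc_b.T            # cosine similarity matrix, (N, M)
    nn_ab = sim.argmax(axis=1)         # best match in B for each row of A
    nn_ba = sim.argmax(axis=0)         # best match in A for each row of B
    idx_a = np.arange(desc_a.shape[0])
    mutual = nn_ba[nn_ab] == idx_a     # the A -> B -> A round trip must return
    return np.stack([idx_a[mutual], nn_ab[mutual]], axis=1)

rng = np.random.default_rng(1)
# Toy data: B is a shuffled, noisy copy of A, so most pixels should match.
A = rng.normal(size=(500, 24))
B = A[rng.permutation(500)] + 0.05 * rng.normal(size=(500, 24))
A /= np.linalg.norm(A, axis=1, keepdims=True)
B /= np.linalg.norm(B, axis=1, keepdims=True)
pairs = reciprocal_matches(A, B)
print(f"{len(pairs)} mutual matches out of 500 possible")
```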
The advancements brought by MASt3R have far-reaching implications for a variety of real-life applications. In autonomous navigation, for example, accurate 3D reconstructions and precise localization are essential for ensuring the safety and reliability of autonomous vehicles and drones. These systems depend on precise spatial awareness to navigate through dynamic environments effectively.
In the realm of robotics, improved depth perception and spatial understanding enable robots to interact more intelligently with their surroundings and perform tasks with greater accuracy and efficiency.
The mapping and surveying industries also stand to benefit significantly from MASt3R’s capabilities. Creating detailed 3D maps of large areas, including urban landscapes and building interiors, becomes more efficient and accurate, facilitating better planning and management.
Moreover, in the world of AR and VR, precise depth and localization information are vital for enhancing the realism and accuracy of immersive experiences. With MASt3R, AR and VR applications can achieve a new level of interactivity and realism, making them more engaging and useful.
MASt3R demonstrates that the breakthrough DUSt3R framework can be easily extended to multiple, varied 3D vision tasks while retaining the simplicity and robustness that distinguish it from other approaches. This 3D foundation model will unlock new possibilities for how we perceive and interact with the three-dimensional world around us.