In 2025 the computer vision team published five new *St3R models based on our original DUSt3R model, pushing 3D reconstruction further than ever: larger-scale, lighter and faster, richer in multimodality and semantics, and increasingly human-centric.
We took a major step at the end of the year by releasing Anny as fully open source under Apache 2.0. Anny is a parametric 3D human model and the first to handle people of all ages, from babies and infants to the elderly.
Finally, with carefully designed distillation, the team showed that all of these capabilities in geometry, semantics and human understanding can be brought together in a single, universal encoder.
Dive into the highlights via the thumbnails below and, if you’re curious for more, a list of additional work that couldn’t make it into the video awaits you at the end!
Kinaema: a recurrent sequence model for memory and pose in motion, Mert Bulent Sariyildiz, Philippe Weinzaepfel, Guillaume Bono, Gianluca Monaci, Christian Wolf, NeurIPS 2025
RANa: Retrieval-Augmented Navigation, Gianluca Monaci, Rafael Sampaio De Rezende, Romain Deffayet, Gabriela Csurka, Guillaume Bono, Herve Dejean, Stephane Clinchant, Christian Wolf, TMLR
Human mesh modeling for Anny body, Romain Brégier, Guénolé Fiche, Laura Bravo-Sanchez, Thomas Lucas, Matthieu Armando, Philippe Weinzaepfel, Grégory Rogez, Fabien Baradel, arXiv:2511.03589
HAMSt3R: Human Aware Multi-view Stereo 3D Reconstruction, Sara Rojas Martinez, Matthieu Armando, Bernard Ghanem, Philippe Weinzaepfel, Vincent Leroy, Grégory Rogez, ICCV 2025
PanSt3R: Multi-view Consistent Panoptic Segmentation, Lojze Zust, Yohann Cabon, Juliette Marrie, Leonid Antsfeld, Boris Chidlovskii, Jérome Revaud, Gabriela Csurka, ICCV 2025
LUDVIG: Learning-Free Uplifting of 2D Visual Features to Gaussian Splatting Scenes, Juliette Marrie, Romain Menegaux, Michael Arbel, Diane Larlus, Julien Mairal, ICCV 2025
MUSt3R: Multi-view Network for Stereo 3D Reconstruction (Highlight), Yohann Cabon, Lucas Stoffl, Leonid Antsfeld, Gabriela Csurka, Boris Chidlovskii, Jérome Revaud, Vincent Leroy, CVPR 2025
Geo4D: Leveraging Video Generators for Geometric 4D Scene Reconstruction (Highlight), Zeren Jiang, Chuanxia Zheng, Iro Laina, Diane Larlus, Andrea Vedaldi, ICCV 2025
Test-time vocabulary adaptation for language-driven object detection, Mingxuan Liu, Tyler L. Hayes, Massimiliano Mancini, Elisa Ricci, Riccardo Volpi, Gabriela Csurka, ICIP 2025
Pow3R: empowering unconstrained 3D reconstruction with camera and scene priors, Wonbong Jang, Philippe Weinzaepfel, Vincent Leroy, Lourdes Agapito, Jérome Revaud, CVPR 2025
MEGA: Masked Generative Autoencoder for Human Mesh Recovery (Oral), Guénolé Fiche, Simon Leglaive, Xavier Alameda-Pineda, Francesc Moreno-Noguer, CVPR 2025
LPOSS: Label Propagation over patches and pixels for Open-vocabulary Semantic Segmentation, Vladan Stojnić, Yannis Kalantidis, Jiri Matas, Giorgos Tolias, CVPR 2025
Layered motion fusion: lifting motion segmentation to 3D in egocentric videos, Vadim Tschernezki, Diane Larlus, Andrea Vedaldi, Iro Laina, CVPR 2025
DUNE: Distilling a UNiversal Encoder from heterogeneous 2D and 3D teachers, Mert Bülent Sarıyıldız, Philippe Weinzaepfel, Thomas Lucas, Pau de Jorge, Diane Larlus, Yannis Kalantidis, CVPR 2025
Gaussian splatting feature fields for (privacy-preserving) visual localization, Maxime Pietrantoni, Gabriela Csurka, Torsten Sattler, CVPR 2025
CondiMen: Conditional Multi-Person Mesh Recovery, Romain Brégier, Fabien Baradel, Thomas Lucas, Salma Galaaoui, Matthieu Armando, Philippe Weinzaepfel, Grégory Rogez, RHOBIN workshop, CVPR 2025
HOSt3R: Keypoint-free Hand-Object 3D Reconstruction from RGB images, Anilkumar Swamy, Vincent Leroy, Philippe Weinzaepfel, Jean-Sébastien Franco, Grégory Rogez, HANDS workshop, CVPR 2025
MASt3R-SfM: a fully-integrated solution for unconstrained Structure-from-Motion (Oral and Best student paper award), Bardienus Duisterhof, Lojze Zust, Philippe Weinzaepfel, Vincent Leroy, Yohann Cabon, Jérome Revaud, 3DV 2025
For a robot to be useful, it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low-code/no-code programming to build AI components that help next-generation robots perform complex real-world tasks. These components help robots interact safely with humans, their physical environment and other robots and systems, represent and update their world knowledge, and share it with the rest of the fleet. More details on our research can be found in the Explore section below.
Visual perception is a necessary part of any intelligent system meant to interact with the world. Robots need to perceive the structure, objects and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate their 3D environment, detect and interact with surrounding objects and people, and continuously adapt when deployed in new environments. More details on our research can be found in the Explore section below.
To be autonomous in real-world everyday spaces, robots should be able to learn from their interactions within those spaces how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing complex everyday tasks in constantly changing environments. More details on our research can be found in the Explore section below.
NAVER France’s 2024 gender equality index score (based on 2023 data) is 87/100. Details of the indicators:
1. Pay gap between women and men: 34/40 points
2. Gap in individual salary increases between women and men: 35/35 points
3. Salary increases upon return from maternity leave: not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
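The individual indicator scores above sum to 74 points rather than 87. Under the French gender equality index methodology, when an indicator cannot be calculated the total is rescaled in proportion to the maximum number of points of the calculable indicators; the short worked example below, which assumes that standard rescaling rule applies here, shows how the published 87/100 would be obtained.

% Worked rescaling (assumption: the standard proportional rule of the French
% index applies when one indicator is not calculable).
\[
\text{score} = \frac{34 + 35 + 5}{40 + 35 + 10} \times 100 = \frac{74}{85} \times 100 \approx 87.1 \;\rightarrow\; 87/100 \text{ (rounded)}.
\]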

The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users through robotics, autonomous driving, phone cameras and any other visual means, reaching people wherever they may be.
