AR reaches perfection when you can’t distinguish the virtual from the real. We experiment with AI and computer vision in ARAO, our AR museum guide, to make the experience feel natural.
Yves Hoppenot | 2019
Virtual elements in Augmented Reality (AR) need to be embedded in the real world as seamlessly as possible. This means that, just like us, they should react to their environment. One example of this kind of integration is behaving like a real object with respect to the physical scene: the virtual element remains visible when it’s in front of a real object and disappears when it’s behind one, something Apple has clearly understood with the release of ARKit 3. When the virtual element moves, dynamic effects can further improve how realistic the AR feels.
We investigated how artificial intelligence could help with navigation in a museum, where a virtual avatar, ‘ARAO’, guides the visitor from artwork to artwork. ARAO was designed to act like a real guide: it is autonomous, pays attention to the visitor, and interacts with both the static and dynamic elements in the environment. ARAO is embedded in reality through a mobile phone app. The video below shows this experiment.
Technology
To make it work, we modeled the indoor environment offline as a 3D CAD model. Using the accurate localization offered by ARKit 2, we then aligned the rendering with reality online. As shown with the entry door, the AR app manages static occlusion. Specific indoor elements, like mirrors, are also modeled to get a dedicated rendering.
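As an illustration of how static occlusion of this kind can be set up, here is a minimal SceneKit/ARKit sketch that turns a mesh exported from an indoor CAD model into an invisible occluder; the `doorOccluderGeometry` parameter is a hypothetical placeholder, not the geometry used in the actual app.

```swift
import ARKit
import SceneKit

// Minimal sketch: make a CAD-derived mesh hide virtual content without being visible itself.
// `doorOccluderGeometry` is a placeholder for geometry exported from the indoor CAD model.
func makeOccluderNode(from doorOccluderGeometry: SCNGeometry) -> SCNNode {
    let material = SCNMaterial()
    material.colorBufferWriteMask = []        // write nothing to the color buffer (invisible)
    material.writesToDepthBuffer = true       // but still write depth, so it occludes what is behind it
    doorOccluderGeometry.materials = [material]

    let node = SCNNode(geometry: doorOccluderGeometry)
    node.renderingOrder = -1                  // render occluders before the regular virtual content
    return node
}
```

Once such a node is aligned with the real scene (here via the CAD model and ARKit localization), any virtual element placed behind it is hidden, which is what produces the entry-door effect.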
For the bottle effect, we used a deep 2D object detection network inspired by the real-time detector YOLO, which we retrained and ran on an iPhone. From the detected 2D bounding box and in conjunction with ARKit, we estimate the object’s pose and size in 3D space. This lets us drive the avatar’s animation and handle visual occlusion.
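A hedged sketch of this kind of 2D-to-3D lifting, using the ARKit 2-era hit-test API: the bounding box is assumed to come from the detector, and this is only an approximation of the approach described above, not the project’s actual pipeline.

```swift
import UIKit
import ARKit

// Sketch: lift the centre of a detected 2D bounding box into a 3D world position
// by hit-testing against ARKit feature points and estimated planes.
func worldTransform(of boundingBox: CGRect, in sceneView: ARSCNView) -> simd_float4x4? {
    let center = CGPoint(x: boundingBox.midX, y: boundingBox.midY)
    // Ray-cast from the screen point into the ARKit scene understanding.
    let results = sceneView.hitTest(center, types: [.featurePoint, .estimatedHorizontalPlane])
    return results.first?.worldTransform
}
```

Combining the recovered 3D position with the size of the 2D box and the camera intrinsics then gives a rough estimate of the object’s extent, which is enough to place occluders and trigger the avatar’s animation.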
The AR app also runs a human pose estimation deep network on the iPhone to detect a person and their pose whenever the phone “sees” someone, at a time when ARKit didn’t yet offer such a feature. The pose estimation was inspired by state-of-the-art solutions such as OpenPose and PoseNet, and ported to the iPhone. Again, thanks to ARKit 2, the app is able to estimate the 3D position of the person and, from that, control the avatar’s animation and the visual occlusion.
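For readers unfamiliar with running such networks on-device, here is a rough sketch of invoking a converted pose-estimation model with Core ML and Vision. `PoseEstimationModel` is a hypothetical model class (not the one used in ARAO), and decoding the output heatmaps into keypoints is model-specific and omitted.

```swift
import Vision
import CoreML

// Sketch: run a converted 2D pose-estimation network (OpenPose/PoseNet-style)
// on-device with Core ML + Vision. `PoseEstimationModel` is a hypothetical
// Core ML model class; keypoint decoding from the heatmaps is omitted.
func detectPose(in pixelBuffer: CVPixelBuffer,
                completion: @escaping ([VNCoreMLFeatureValueObservation]) -> Void) throws {
    let model = try VNCoreMLModel(for: PoseEstimationModel().model)
    let request = VNCoreMLRequest(model: model) { request, _ in
        let heatmaps = request.results as? [VNCoreMLFeatureValueObservation] ?? []
        completion(heatmaps)  // 2D keypoints are decoded from these heatmaps downstream
    }
    let handler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer, options: [:])
    try handler.perform([request])
}
```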
User Experience
To create a tour, the museum visitor points ARAO at a piece of artwork of interest for a few seconds until it detects which one it is and selects the corresponding tour. In the case shown in the video, it’s the tour called “Main Contributors”, a subset of the exhibition. ARAO starts the tour by showing you which way to go, and stops at each artwork on the tour to indicate that it’s included. Additional content describing the artwork is not shown in the video simply because we wanted to stay focused on the guidance, but it is of course feasible. What we have shown, however, is that AR can give a new angle on artwork, e.g. by adding a depth dimension to a 2D painting.
We wanted to give visitors some freedom so that they can easily stray from the tour if they’re attracted to another piece of art, or take a rest somewhere, without interference from ARAO. When this happens, ARAO adapts by either showing the closest artwork first or waiting before calling for attention. In other modes, such as the ‘take me straight there’ path (not shown in the video), ARAO is more insistent if you don’t follow immediately.
As in the real world, unexpected things come up, like encountering a closed door. In this case, ARAO behaves in the most natural way possible, i.e. by asking for help to open the door.
The ARAO team are Nicolas Monet, Claudine Combe, Hadrien Combaluzier and Yves Hoppenot. The project was carried out in collaboration with 3D emotion for the design of ARAO and the animations.
For more information please contact the author, Yves Hoppenot.