The 3rd NAVER LABS Europe International Workshop on AI for Robotics
15 – 16 November 2023
Today’s developments in machine learning heavily focus on big data approaches. However, many applications in robotics require interactive learning approaches that can rely on only a few demonstrations or trials. The main challenge boils down to finding structures that can be used in a wide range of tasks, which requires us to advance on several fronts, including (from low level to high level): data structures, geometric structures and combination structures, all of which will be discussed in this presentation. As an example of data structures, this talk will discuss tensor factorization techniques that can be applied to global optimization problems to efficiently extract and compress information, while providing diverse human-guided learning capabilities (imitation and environment scaffolding). As examples of geometric structures, the use of Riemannian geometry and geometric algebra in robotics will be covered, where prior knowledge about the physical world can be embedded within the representations of skills and associated learning algorithms. A brief overview will also be given of combination structures, which relate to movement primitive and behavior primitive representations in robotics that can be embedded within optimal control problem formulations.
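For readers unfamiliar with tensor factorization, the following minimal sketch (an illustrative assumption, not the specific method presented in the talk) fits a rank-R CP decomposition of a 3-way tensor by alternating least squares in NumPy, showing how a large array of values can be compressed into a few small factor matrices:

import numpy as np

def cp_als(X, rank, n_iters=50, seed=0):
    """Fit a rank-R CP factorization of a 3-way tensor by alternating least squares."""
    rng = np.random.default_rng(seed)
    I, J, K = X.shape
    A = rng.standard_normal((I, rank))
    B = rng.standard_normal((J, rank))
    C = rng.standard_normal((K, rank))
    for _ in range(n_iters):
        # Each factor update solves a linear least-squares problem
        # with the other two factors held fixed.
        A = np.einsum('ijk,jr,kr->ir', X, B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = np.einsum('ijk,ir,kr->jr', X, A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = np.einsum('ijk,ir,jr->kr', X, A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy usage: compress an exactly low-rank tensor and check the reconstruction error.
rng = np.random.default_rng(1)
A0, B0, C0 = rng.standard_normal((10, 3)), rng.standard_normal((12, 3)), rng.standard_normal((8, 3))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=3)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
print('relative error:', np.linalg.norm(X - X_hat) / np.linalg.norm(X))

The factors A, B and C store I·R + J·R + K·R numbers instead of I·J·K, which is what makes such structures attractive for compressing and querying information efficiently.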
As robots develop autonomy and integrate into daily environments to assist and collaborate with humans, they should not only perform tasks successfully but also adhere to social norms and expectations. This talk will give an overview of definitions and key concepts as well as the challenges involved in building socially acceptable robots. It will draw from our ongoing research to present examples, including how robots can navigate crowded environments with social awareness, how they can explain their actions to users and how they can learn to imitate human behaviours.
In deployment scenarios such as homes and warehouses, mobile robots are expected to autonomously navigate for extended periods, seamlessly executing tasks articulated in terms that are intuitively understandable by human operators. We present GO To Any Thing (GOAT), a universal navigation system capable of tackling these requirements with three key features: a) Multimodal: it can tackle goals specified via category labels, target images, and language descriptions, b) Lifelong: it benefits from its past experience in the same environment, and c) Platform Agnostic: it can be quickly deployed on robots with different embodiments. GOAT is made possible through a modular system design and a continually augmented instance-aware semantic memory that keeps track of the appearance of objects from different viewpoints in addition to category-level semantics. This enables GOAT to distinguish between different instances of the same category and to navigate to targets specified by images and language descriptions. In experimental comparisons spanning over 90 hours in 9 different homes consisting of 675 goals selected across 200+ different object instances, we find GOAT achieves an overall success rate of 83%, surpassing previous methods and ablations by 32% (absolute improvement). GOAT improves with experience in the environment, from a 60% success rate at the first goal to a 90% success rate after exploration. In addition, we demonstrate that GOAT can readily be applied to downstream tasks such as pick and place and social navigation.
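As a loose illustration of what an instance-aware semantic memory might look like (the class and method names below are hypothetical, not GOAT's actual implementation), each stored object instance keeps its category together with appearance embeddings from several viewpoints, so that goals given as category labels, goal images or language descriptions can all be matched against the same store:

import numpy as np
from dataclasses import dataclass, field

@dataclass
class ObjectInstance:
    category: str                                    # category-level semantics, e.g. "chair"
    position: tuple                                   # where the instance was observed on the map
    embeddings: list = field(default_factory=list)    # appearance features from different viewpoints

class InstanceAwareMemory:
    """Toy instance-aware semantic memory: goals can be matched by category label
    or by an embedding computed from a goal image or a language description."""
    def __init__(self):
        self.instances = []

    def add_observation(self, category, position, embedding):
        # Naive association: reuse an instance of the same category seen nearby,
        # otherwise create a new instance.
        for inst in self.instances:
            if inst.category == category and np.linalg.norm(np.subtract(inst.position, position)) < 1.0:
                inst.embeddings.append(embedding)
                return inst
        inst = ObjectInstance(category, position, [embedding])
        self.instances.append(inst)
        return inst

    def query_by_category(self, category):
        return [inst for inst in self.instances if inst.category == category]

    def query_by_embedding(self, goal_embedding):
        # Cosine similarity against the best-matching viewpoint of each instance.
        def score(inst):
            return max(np.dot(e, goal_embedding) / (np.linalg.norm(e) * np.linalg.norm(goal_embedding))
                       for e in inst.embeddings)
        return max(self.instances, key=score) if self.instances else None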
Martial Hebert is University Professor of Robotics at Carnegie Mellon University. He served as Director of the Robotics Institute and is now the Dean of the School of Computer Science. His research is in the area of Computer Vision, with applications to ground and aerial autonomous systems. His most recent interests are in self-supervised learning for video analysis for robotics applications.
This session will discuss recent approaches to self-supervised learning of general models for video analysis. The goal of this work is to learn models that can be used for downstream tasks in the context of autonomous navigation and manipulation: object detection, motion, segmentation, etc. In particular, we will investigate how motion information can be used to make self-supervised approaches effective on videos of real-world environments.
Nicolas Obin is associate professor (2013) at Sorbonne Université and research scientist in the Sound Analysis and Synthesis team at the Science and Technology for Sound and Music laboratory (Ircam, CNRS, Sorbonne Université, Ministère de la Culture). He is interested in human, animal, and robot communication and his main research area is the generative modeling of complex human productions with various applications in sound, speech, and music generation. As part of his artistic commitment, he actively promotes digital science and technology for arts, culture, and heritage, with many collaborations with internationally renowned artists.
speech synthesis, neural networks, voice cloning, voice and language design, human-robot interaction
Autonomous robots that can assist humans in daily life situations have been a long-standing vision of robotics, artificial intelligence and cognitive sciences. A first step towards this goal is to create robots that can learn tasks triggered by environmental context or higher-level instructions. However, learning techniques have yet to live up to this promise, as only a few methods manage to scale to high-dimensional manipulators or humanoid robots. This talk will present a general framework suitable for learning motor skills in robotics which is based on the principles behind many analytical robotics approaches, and discuss learning on three different levels of abstraction, i.e., learning accurate control for task execution, learning of motor primitives to acquire simple movements, and learning of the task-dependent "hyperparameters" of these motor primitives to learn complex tasks. Task-appropriate learning approaches for imitation learning, model learning and reinforcement learning for robots with many degrees of freedom will be discussed. Empirical evaluations on several robot systems will illustrate the effectiveness and applicability to learning control on an anthropomorphic robot arm.
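Motor primitives in this line of work are commonly realized as dynamic movement primitives; the sketch below is a minimal one-dimensional version under that assumption (the gains, basis functions and toy demonstration are illustrative), where a spring-damper system modulated by a learned forcing term reproduces a demonstrated trajectory:

import numpy as np

def learn_dmp(x_demo, dt, n_basis=20, alpha=25.0, beta=6.25, alpha_s=4.0):
    """Fit the forcing term of a 1-D dynamic movement primitive to a demonstration."""
    T = len(x_demo)
    tau = (T - 1) * dt
    x0, g = x_demo[0], x_demo[-1]
    xd = np.gradient(x_demo, dt)
    xdd = np.gradient(xd, dt)
    s = np.exp(-alpha_s * np.arange(T) * dt / tau)            # canonical phase, decays 1 -> 0
    f_target = tau**2 * xdd - alpha * (beta * (g - x_demo) - tau * xd)
    centers = np.exp(-alpha_s * np.linspace(0, 1, n_basis))    # basis centers along the phase
    widths = n_basis**1.5 / centers
    psi = np.exp(-widths * (s[:, None] - centers)**2)           # (T, n_basis) basis activations
    xi = s * (g - x0)
    # Locally weighted regression: one weight per basis function.
    w = (psi * (xi * f_target)[:, None]).sum(0) / ((psi * (xi**2)[:, None]).sum(0) + 1e-10)
    return dict(w=w, centers=centers, widths=widths, x0=x0, g=g, tau=tau,
                alpha=alpha, beta=beta, alpha_s=alpha_s)

def rollout_dmp(dmp, dt, T):
    """Integrate the DMP with Euler steps to reproduce the movement."""
    x, v, s = dmp['x0'], 0.0, 1.0
    traj = []
    for _ in range(T):
        psi = np.exp(-dmp['widths'] * (s - dmp['centers'])**2)
        f = s * (dmp['g'] - dmp['x0']) * (psi @ dmp['w']) / (psi.sum() + 1e-10)
        vdot = (dmp['alpha'] * (dmp['beta'] * (dmp['g'] - x) - v) + f) / dmp['tau']
        v += vdot * dt
        x += v / dmp['tau'] * dt
        s += -dmp['alpha_s'] * s / dmp['tau'] * dt
        traj.append(x)
    return np.array(traj)

# Toy usage: learn a smooth reaching demonstration and replay it.
t = np.linspace(0, 1, 200)
demo = np.sin(np.pi * t / 2)          # hypothetical demonstration from 0 to 1
dmp = learn_dmp(demo, dt=t[1] - t[0])
replay = rollout_dmp(dmp, dt=t[1] - t[0], T=len(t))

The "hyperparameters" mentioned above would correspond to quantities such as the goal g, the duration tau or the amplitude of the primitive, which can then be adapted by a higher-level learner.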
Autonomous drones play a crucial role in search-and-rescue, delivery and inspection missions, and they promise to increase productivity by a factor of 10. However, these drones still fall far short of human pilots in speed, versatility and robustness. What does it take to fly autonomous drones as agile as, or even better than, human pilots? Autonomous, agile navigation through unknown, GPS-denied environments poses several challenges for robotics research in terms of perception, learning, planning and control. This talk will show how the combination of model-based and machine learning methods, united with the power of new, low-latency sensors such as event cameras, can allow drones to achieve unprecedented speed and robustness by relying solely on onboard computing. This can result in better productivity and greater safety of future autonomous aircraft.
Cordelia Schmid is a research director at Inria and has a joint appointment at Google research. She is a member of the German National Academy of Sciences, Leopoldina and a fellow of IEEE and the ELLIS society. She was awarded the Longuet-Higgins prize in 2006, 2014 and 2016, the Koenderink prize in 2018 and the Helmholtz prize in 2023, for fundamental contributions in computer vision that have withstood the test of time. She received an ERC advanced grant in 2013, the Humboldt research award in 2015, the Inria & French Academy of Science Grand Prix in 2016, the Royal Society Milner award in 2020, the PAMI distinguished researcher award in 2021 and the Körber European Science Prize in 2023.
vision-guided robotics
Future artificial agents immersed in our society will have to learn under the guidance of their users. In “Towards Teachable Autotelic Agents”, Olivier and colleagues have listed some of the requirements to endow these agents with the capability of being efficiently taught by humans. Among other things, they have outlined that such agents crucially need to be able to set and learn to pursue their own goals driven by some intrinsic motivation. A survey of the literature has also shown, however, that most such existing agents lack *inferential social learning* capabilities, as outlined by the work of Hyowon Gweon. This talk will describe ongoing efforts to fill this gap. It will be shown how a simple Bayesian goal inference mechanism can be leveraged to endow an artificial teacher with pedagogical teaching capabilities and an artificial learner with pragmatic interpretation capabilities. These capabilities will be illustrated with sensorimotor demonstrations, language instructions and a combination of both. It will then be shown how this first teacher-learner interaction framework can be enriched by introducing in the agents the capability to model each other, endowing them with a preliminary form of theory of mind. In particular, the pros and cons of modeling these additional capabilities with a Bayesian framework versus deep neural networks will be highlighted and some of the related literature discussed.
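As a minimal sketch of Bayesian goal inference (the line world, the action set and the noisily rational likelihood below are illustrative assumptions, not the specific models used in this work), the learner keeps a posterior over candidate goals and updates it after each observed teacher action:

import numpy as np

def action_likelihood(state, action, goal, beta=3.0):
    """P(action | state, goal): a noisily rational teacher prefers actions
    that reduce the distance to its goal (softmax over progress)."""
    actions = np.array([-1, 0, 1])                 # move left, stay, move right
    utilities = -np.abs((state + actions) - goal)
    probs = np.exp(beta * utilities)
    probs /= probs.sum()
    return probs[actions == action][0]

def update_posterior(posterior, goals, state, action):
    """One step of Bayes rule: P(g | a, s) is proportional to P(a | s, g) P(g)."""
    likelihoods = np.array([action_likelihood(state, action, g) for g in goals])
    posterior = posterior * likelihoods
    return posterior / posterior.sum()

# Toy usage: three candidate goals on a line; the learner watches the teacher
# move right three times from position 2 and then stay at 5, and infers the
# goal is most likely the one at 5.
goals = [0, 5, 9]
posterior = np.ones(len(goals)) / len(goals)       # uniform prior over goals
for state, action in [(2, 1), (3, 1), (4, 1), (5, 0)]:
    posterior = update_posterior(posterior, goals, state, action)
print(dict(zip(goals, np.round(posterior, 3))))

The same machinery can be turned around by a pedagogical teacher, which can choose the demonstration that makes the learner's posterior concentrate fastest on the intended goal.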
Language models have emerged as a game-changing technology in the field of artificial intelligence, often regarded as a base of general knowledge and common sense. They serve as the basis of numerous AI systems, allowing quick adaptation to new domains. One line of research focuses on integrating multimodal signals into the pre-training of language models to enhance the perception and the semantics of words or elements. This talk will focus on the robotics application domain and provide a literature review on how language models can be leveraged to enhance robot actions. Among other aspects, it will include the investigation of the planning capabilities of language models to generate sequences of actions.
Embodied AI is a rising paradigm of AI that aims to enable agents to interact with the physical world. Embodied agents can acquire a large amount of data through interaction with the physical world, which makes it possible for them to close the perception-cognition-action loop and learn autonomously from the world to revise their internal models. This talk will present the big picture of work from my group in recent years to build an ecosystem to support the development of embodied AI, which includes data collection through sample-efficient model predictive control algorithms, simulation technology with a low sim2real domain gap, and teleoperation.
This talk will summarize recent work on bridging the gap between natural language and 3D human motions. Gul will show results on text-to-motion synthesis, i.e. text-conditioned generative models for controllable motion synthesis, with a special focus on compositionality to handle fine-grained textual descriptions. Results from text-to-motion retrieval models will also be shared. The papers relevant to the work presented are ACTOR, TEMOS and TMR [Petrovich 2021, 2022, 2023], and TEACH and SINC [Athanasiou 2022, 2023].
Markus Wulfmeier is a researcher in machine learning and robotics at Google DeepMind with a focus on data-driven knowledge transfer. His work aims at efficiently scalable algorithms applicable across a variety of real-world applications including robotic locomotion, manipulation and navigation. Markus was a postdoctoral research scientist at the Oxford Robotics Institute and a member of Oxford University’s New College where he completed his PhD. Over the years, he has held short-term visiting scholar positions with UC Berkeley, MIT and the Swiss Federal Institute of Technology.
The recent, vast progress in artificial intelligence, in particular deep learning for vision and language processing, has not been fully matched in the control of real-world systems, such as autonomous vehicles and other robotic platforms. While we have made strides on the perception side based on knowledge extracted from immense, web-based datasets, the decision-making problem remains a challenge. This talk focuses on a critical limitation: the availability of diverse, relevant data for control. Methods for acquiring such data and for transferring data-driven knowledge to address this problem will be discussed, concluding with questions about the role of interactive learning at a time of ever-growing datasets and machine learning models.
“Making new connections – AI for the physical world at NAVER LABS Europe”
“Learning General Models for Computer Vision Tasks”
“Inductive Priors for Robot Learning”
“GOAT: Go to AnyThing”
“Socially Acceptable Human-Robot Interaction Systems”
“Is Human Motion a Language without Words?”
Every day, the NAVER robot ‘AROUND’ operates in NAVER’s office building ‘1784’ in South Korea, with current AI research from NAVER LABS Europe being used to improve its capabilities.
“Robot Learning from Few Samples by Exploiting the Structure and Geometry of Data”
“Large-Scale Interactive Learning”
“Transformers for Vision-Language Navigation and Manipulation”
“How to Leverage Language Models for Robotics: A Literature Review”
“The Speaking Machine: Neural Speech Synthesis and Sound Design in Robotics”
“Human-Level Performance with Autonomous Vision-Based Drones”
Due to a technical issue with the recording of this talk, we invite you to view an extended version given at ETH Zurich.
“Modeling the 3D Physical World for Embodied Intelligence”
“Towards Inferential Social Learning in Teachable Autotelic Agents”
NAVER LABS Europe is organizing ground transportation from Grenoble to the event for all on-site participants.
This event is open and free of charge, but participants are required to register. Please select one of the options below.
Poster submission deadline: 31st October. After the deadline please contact us.
NAVER LABS is the R&D subsidiary of NAVER Corporation, Korea’s leading internet company and a global tech company with hundreds of millions of users worldwide.
The European branch, NAVER LABS Europe, is the biggest industrial AI research center in France and its sister lab, NAVER LABS Korea, is a leading robotics research organisation with projects such as indoor and outdoor mapping, autonomous service robots, the advanced robot manipulator AMBIDEX, large-scale digital twins, and most importantly, 1784, Korea’s first high-rise robot-friendly building where a fleet of 100 robots performs delivery tasks for employees. 1784 is not only NAVER’s new headquarters, it also serves as a testbed for future robotics solutions, all controlled by NAVER LABS’ own AI Robot Cloud, ARC.
NAVER LABS Europe, 6-8 chemin de Maupertuis, 38240 Meylan, France
To make robots autonomous in real-world everyday spaces, they should be able to learn, from their interactions within these spaces, how to best execute tasks specified by non-expert users in a safe and reliable way. To do so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms to improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments. More details on our research can be found in the Explore section below.
For a robot to be useful it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low code/no code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, other robots and systems, represent and update their world knowledge and share it with the rest of the fleet. More details on our research can be found in the Explore section below.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, the objects, and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate in their 3D environment, detect and interact with surrounding objects and people and continuously adapt themselves when deployed in new environments. More details on our research can be found in the Explore section below.
Details on the gender equality index score 2024 (related to year 2023) for NAVER France of 87/100.
1. Difference in female/male salary: 34/40 points
2. Difference in individual salary increases female/male: 35/35 points
3. Salary increases upon return from maternity leave: not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, autonomous driving, via phone cameras and any other visual means to reach people wherever they may be.