Machine Learning for Robotics

Making embodied agents more robust and safer in changing, everyday environments with robot learning, manipulation and navigation, locomotion, robot task learning and reinforcement learning.




Modern robotics, with recent advances in statistical machine learning and deep learning pushing the boundaries previously set by optimal control theory, offers immense opportunities to support many facets of daily life. We leverage statistical learning and sequential decision theory to address long-horizon task learning and task generalization, improving the robustness and safety of embodied agents in unstructured and unpredictable environments. Our research covers the main applications of modern robotics, such as control for navigation and manipulation with challenging input modalities including images, natural language and touch sensing. We tackle demanding settings such as deformable object manipulation and multi-robot coordination for navigation. Our work spans the spectrum from efficient and robust learning algorithms to novel, expressive model definitions and challenging everyday tasks in crowded human environments. Embodied agents must continuously adapt to ever-changing environments under the evolving supervision of human experts in the field, experts who typically have no machine learning background.

Our second goal is to better understand and advance the field of machine learning itself, because robotics gives us an ideal opportunity to challenge the limits of our assumptions about this fascinating research subject. In this context, common-sense acquisition and reasoning in complex situations remain the most important long-term focus of our research effort. We combine machine learning theory, optimal control and human-computer interaction, focusing on long-term problems of great relevance to NAVER cloud robotics and Robotics-as-a-Service platforms.

The research lab is part of the machine learning and robot learning communities, and our activities are often pursued in collaboration with external academic partners.

Learning Robot Manipulation Blog
A unified framework for robot arm path planning—which combines offline modelling with inverse-solution mapping based on data-driven statistical techniques—increases computational efficiency and dramatically reduces robot-operation complexity. Blog article by Seungsu Kim and Julien Perez
DCRL: A new family of approaches to few-shot imitation
A new family of approaches to few-shot imitation: demonstration-conditioned reinforcement learning. Blog by Theo Cachet, Julien Perez and Chris Dance.
Risk Sensitive Robot Navigation
We explore how effectively a single policy learned by reinforcement learning can modulate robot behaviour, from risk-averse (cautious) to risk-neutral (maximizing the average reward). Blog by Chris Dance et al.
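To illustrate the risk-averse-to-risk-neutral spectrum mentioned above, the sketch below uses conditional value-at-risk (CVaR), a common risk measure in risk-sensitive reinforcement learning: the mean of the worst alpha-fraction of returns. This is a hedged, self-contained illustration of the general idea, not the formulation used in the blog article; the function names and the simulated return distribution are assumptions for the example.

```python
import numpy as np

def cvar(returns, alpha):
    """Conditional value-at-risk: the mean of the worst alpha-fraction
    of returns. alpha = 1.0 recovers the risk-neutral objective
    (the plain average reward); small alpha is risk-averse."""
    returns = np.sort(np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

# Hypothetical episode returns for one navigation policy.
rng = np.random.default_rng(0)
returns = rng.normal(loc=10.0, scale=4.0, size=10_000)

risk_neutral = cvar(returns, alpha=1.0)  # approximately the mean return
risk_averse = cvar(returns, alpha=0.1)   # mean of the worst 10% of outcomes
```

A risk-conditioned policy would take alpha as an extra input, so a single trained policy can be modulated from cautious (optimizing `risk_averse`-style objectives) to risk-neutral behaviour at deployment time.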


Machine Learning for Robotics team:

Michel Aractingi, PhD candidate
Cécile Boulard
Theo Cachet
David Emukpere
Paul Jansonnie, PhD candidate
Michael Niemaz
Julien Perez, Team Lead
Denys Proux
Bingbing Wu
