Machine Learning for Robotics
Making embodied agents more robust and safer in changing, everyday environments with robot learning, manipulation and navigation, locomotion, robot task learning and reinforcement learning.
Highlights
2022
- DiPCAN: Distilling Privileged information for Crowd-Aware Navigation nominated for Best Paper @RSS 2022
- HRI 2022 workshop paper (LEAP-HRI) Human in the Lifelong Reinforcement Learning Loop
2021
- Co-organizing the 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning at NeurIPS
- Paper at ICML 2021 on DCRL, demonstration-conditioned reinforcement learning
- 2 papers at ICRA 2021 on robot task learning and robot navigation.
- Paper at EACL 2021 on Globalizing BERT-based transformer architectures for long document summarization
Modern robotics represents an immense source of opportunity to facilitate many facets of our daily life, with recent advances in statistical machine learning and deep learning pushing boundaries previously set by optimal control theory. We focus on leveraging statistical learning and sequential decision theory to cope with long-horizon task learning and task generalization, improving the robustness and safety of embodied agents in unstructured and unpredictable environments. Our research covers the main applications of modern robotics, such as control for navigation and manipulation, with challenging input modalities in the form of images, natural language and touch sensing. We address demanding settings such as deformable-object manipulation and multi-robot coordination for navigation. Our work spans the spectrum from efficient and robust learning algorithms to novel, expressive model definitions and challenging everyday tasks in human-crowded environments. Embodied-agent scenarios naturally require the capability to adapt continuously to ever-changing environments, under the evolving supervision of human experts in the field, experts who typically have no machine learning background.
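The sequential decision-making setting mentioned above can be illustrated by the standard agent-environment interaction loop from reinforcement learning. The sketch below is purely illustrative and is not the team's code: `LineWorld` is a hypothetical toy navigation task with a sparse goal reward, and `run_episode` is a generic episode rollout.

```python
class LineWorld:
    """Toy 1-D navigation task: reach position `goal` starting from 0.

    Illustrative stand-in for a robot navigation environment; the agent
    receives a sparse reward on reaching the goal and a small step penalty
    otherwise (a common way to encourage short trajectories).
    """

    def __init__(self, goal=5, max_steps=50):
        self.goal, self.max_steps = goal, max_steps

    def reset(self):
        self.pos, self.t = 0, 0
        return self.pos

    def step(self, action):
        # action is -1 (move left) or +1 (move right)
        self.pos += action
        self.t += 1
        done = self.pos == self.goal or self.t >= self.max_steps
        reward = 1.0 if self.pos == self.goal else -0.01
        return self.pos, reward, done


def run_episode(env, policy):
    """Generic agent-environment loop: observe, act, accumulate reward."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total


# Example: a hand-coded policy that always steps toward the goal.
episode_return = run_episode(LineWorld(), lambda obs: 1)
```

In practice the hand-coded policy would be replaced by a learned one (e.g. trained with a policy-gradient or actor-critic method), but the interaction loop itself stays the same.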
Our second goal is to better understand and develop the field of machine learning itself, because robotics gives us an ideal opportunity to challenge the limits of our assumptions about this fascinating research subject. In this context, common-sense acquisition and reasoning in complex situations remain the most important long-term focus of our research effort. We combine machine learning theory, optimal control and Human-Computer Interaction, focusing on long-term problems with great relevance to NAVER cloud robotics and Robotics-as-a-Service platforms.
The research lab is part of the machine learning and robotic learning community, and our activities are often pursued in collaboration with external academic partners.
Recent publications
- DiPCAN: Distilling Privileged information for Crowd-Aware Navigation, Gianluca Monaci, Michel Aractingi, Tomi Silander, RSS 2022
- Human in the Lifelong Reinforcement Learning Loop, Thierry Jacquin, Julien Perez, Cecile Boulard, LEAP-HRI workshop, HRI 2022.
- Demonstration-conditioned reinforcement learning for few-shot imitation, Theo Cachet, Julien Perez, Chris Dance, ICML 2021
- Learning reachable manifold and inverse mapping for a redundant robot manipulator, Seungsu Kim, Julien Perez, ICRA 2021
- Risk conditioned distributional soft actor-critic for risk-sensitive navigation, Jinyoung Choi, Christopher Dance, Jung-eun Kim, Seulbin Hwang, Kyung-sik Park, ICRA 2021
- Transformer-based meta-imitation learning for robotic manipulation, Theo Cachet, Julien Perez, Seungsu Kim, 3rd workshop on robot learning, NeurIPS 2020
- Fast adaptation of deep reinforcement learning-based navigation skills to human preference, Jinyoung Choi, Christopher Dance, Jung-eun Kim, Kyung-sik Park, Jaehun Han, Joonho Seo, Minsu Kim, ICRA 2020
- A quantile-based approach for hyperparameter transfer learning, David Salinas, Huibin Shen, Valerio Perrone, ICML 2020
- Bayesian Network Fisher Kernel for Categorical Feature Spaces, Janne Leppä-aho, Tomi Silander, Teemu Roos, Behaviormetrika, January 2020
- Optimal Policies for Observing Time Series and Related Restless Bandit Problems, Christopher Dance, Tomi Silander, Journal of Machine Learning Research (JMLR), 20 (35), pp. 1-93