MACHINE LEARNING AND OPTIMIZATION
Innovative models and algorithms, and new tasks that push the boundaries and bring intelligent systems into our everyday lives.
- Paper at RSS 2022, DiPCAN: Distilling Privileged information for Crowd-Aware Navigation
- Workshop paper at HRI 2022 (LEAP-HRI): Human in the Lifelong Reinforcement Learning Loop
- Co-organizing the 4th Robot Learning Workshop: Self-Supervised and Lifelong Learning at NeurIPS 2021
- Paper at ICML 2021 on demonstration-conditioned reinforcement learning (DCRL)
- 2 papers at ICRA 2021 on robot task learning and robot navigation.
- Paper at EACL 2021 on Globalizing BERT-based transformer architectures for long document summarization
- Co-organizing the 3rd Robot Learning Workshop: Grounding Machine Learning Development in the Real World at NeurIPS 2020
- Paper at ICML 2020, ‘A quantile-based approach for hyperparameter transfer learning’
Over the last two decades, research in machine learning has evolved from a promising science into an industrial reality. Formalized as constrained optimization or mathematical integration problems, tasks once considered beyond reach now have practical solutions. In this context, machine learning and optimization constitute a cornerstone of the design and development of systems that can adapt and improve over time.
We propose innovative models, design algorithms and imagine new tasks that push the possibilities opened by this ongoing revolution. These will make our vision a reality by bringing to life intelligent systems that supervise, enhance, secure and automate our everyday activities.
We work across deep learning, autonomous indoor robotics, adversarial learning protocols, machine reading and optimization in large graphs. We contribute to the development of the cutting-edge products of NAVER LABS and are very active in the scientific community, where we publish papers, contribute code and datasets, and organize conferences, workshops and challenges.
- DiPCAN: Distilling Privileged information for Crowd-Aware Navigation, Gianluca Monaci, Michel Aractingi, Tomi Silander, RSS 2022
- Human in the Lifelong Reinforcement Learning Loop, Thierry Jacquin, Julien Perez, Cecile Boulard, LEAP-HRI workshop, HRI 2022
- Demonstration-conditioned reinforcement learning for few-shot imitation, Theo Cachet, Julien Perez, Chris Dance, ICML 2021
- Learning reachable manifold and inverse mapping for a redundant robot manipulator, Seungsu Kim, Julien Perez, ICRA 2021
- Risk conditioned distributional soft actor-critic for risk-sensitive navigation, Jinyoung Choi, Christopher Dance, Jung-eun Kim, Seulbin Hwang, Kyung-sik Park, ICRA 2021
- Globalizing BERT-based transformer architectures for long document summarization, Quentin Grail, Julien Perez, Eric Gaussier, EACL 2021
- Transformer-based meta-imitation learning for robotic manipulation, Theo Cachet, Julien Perez, Seungsu Kim, 3rd workshop on robot learning, NeurIPS 2020
- Faster preprocessing for the trip-based public transit routing algorithm, Vassilissa Lehoux, Christelle Loiodice, ATMOS 2020
- Fast adaptation of deep reinforcement learning-based navigation skills to human preference, Jinyoung Choi, Christopher Dance, Jung-eun Kim, Kyung-sik Park, Jaehun Han, Joonho Seo, Minsu Kim, ICRA 2020
- A quantile-based approach for hyperparameter transfer learning, David Salinas, Huibin Shen, Valerio Perrone, ICML 2020
- Bayesian Network Fisher Kernel for Categorical Feature Spaces, Janne Leppa-aho, Tomi Silander, Teemu Roos, Behaviormetrika, January 2020
- Optimal Policies for Observing Time Series and Related Restless Bandit Problems, Christopher Dance, Tomi Silander, Journal of Machine Learning Research (JMLR), 20 (35), pp. 1-93
- Adversarial networks for machine reading, Quentin Grail, Julien Perez, Tomi Silander, Revue TAL, 59, 2019