NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 24 June 2021, 10:00 am (GMT+02:00)
Data-efficient, robust, and adaptive reinforcement learning
About the speaker: Joschka Boedecker is an assistant professor of neurorobotics at the University of Freiburg, Germany. He studied computer science at the University of Koblenz-Landau, Germany, and artificial intelligence at the University of Georgia, USA. He received his PhD degree in engineering from Osaka University, Japan, in 2011. He was a postdoctoral fellow with Minoru Asada in Osaka and with Martin Riedmiller in Freiburg before starting as an assistant professor.
His research interests are at the intersection of machine learning and robotics, with a focus on deep reinforcement learning.
Abstract: Despite substantial progress in robotics, robots are still rarely seen outside of well-controlled factory environments. To enable them to work alongside people in everyday environments, on tasks that are difficult to specify in detail a priori, robots will need to become much more robust, adaptive, and intuitive to program than they currently are. Machine learning methods are promising in this regard, but they often need huge amounts of data to work well. In robot learning scenarios, however, collecting such data sets is often not realistic: it causes too much wear and tear on the robot hardware and imposes prohibitively long waiting times on users. In this talk, I will present our work spanning the spectrum from fundamental efficiency improvements of tabular and deep reinforcement learning algorithms to applications on intelligent robotic systems. I will illustrate how dynamic input representations, combinations of model-based and model-free reinforcement learning techniques, and novel inverse Q-learning algorithms improve the robustness, adaptivity, and efficiency of learning robots, and how they enable a more intuitive specification of the user's desires.
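For readers unfamiliar with the tabular methods the abstract refers to, the sketch below shows the standard Q-learning update on a toy chain environment. It is generic textbook background only, not the speaker's inverse Q-learning or model-based/model-free methods; the environment, hyperparameters, and episode count are illustrative assumptions.

```python
# Minimal, generic sketch of tabular Q-learning on a toy 5-state chain.
# Background illustration only; not the speaker's algorithms.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [0, 1]      # 0 = move left, 1 = move right
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # assumed hyperparameters

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy dynamics: reward 1 only when the right end of the chain is reached."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = reward + (0.0 if done else GAMMA * max(Q[(next_state, a)] for a in ACTIONS))
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = next_state

# Greedy policy after training: should prefer moving right in every state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```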