NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 4th November 2024, 3:00 pm (CET)
Self-supervised reinforcement learning: algorithms and emergent properties
About the speaker: Benjamin Eysenbach is an Assistant Professor of Computer Science at Princeton University, where he runs the Princeton Reinforcement Learning Lab. His research focuses on foundational RL algorithms: AI methods that learn to make intelligent decisions through trial and error. His past research has produced state-of-the-art algorithms with fewer hyperparameters than prior work. His work draws inspiration from different branches of machine learning and probabilistic inference. One area of particular focus is self-supervised RL methods, which enable agents to learn intelligent behaviors without reward labels or expert demonstrations. Prior to joining Princeton, he received his PhD in machine learning from Carnegie Mellon University, worked at Google AI, and studied math as an undergraduate at MIT.
Abstract: In this talk, I will discuss recent work on self-supervised reinforcement learning, focusing on how we can learn complex behaviors without the need for hand-crafted rewards or demonstrations.
I will introduce contrastive RL, a recent line of work that can extract goal-reaching skills from unlabelled interactions. This method highlights that there is much more to self-supervised RL than simply adding an LLM or VLM to an RL algorithm; rather, self-supervised RL can be seen as a form of generative AI itself. I will also share some recent work on blazing-fast simulators and new benchmarks, which have accelerated research in my group. Finally, I’ll discuss emergent properties in self-supervised RL: preliminary evidence that we have found, and hints for where to go searching for more.
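To give a flavour of the idea, the sketch below is a minimal, illustrative toy (not the speaker's implementation): a bilinear critic f(s, g) = (As)·(Bg) is trained with an InfoNCE-style contrastive loss so that a state scores highly against goals drawn from its own future and low against goals from other trajectories. All names, the toy dynamics, and the hyperparameters here are assumptions made for illustration.

```python
import numpy as np

# Illustrative sketch of a contrastive goal-conditioned critic, in the
# spirit of contrastive RL. All details (toy dynamics, linear critic,
# hyperparameters) are assumptions for this demo, not the actual method.
rng = np.random.default_rng(0)
dim, batch, steps, lr = 4, 64, 500, 0.05
A = rng.normal(scale=0.1, size=(dim, dim))  # state embedding
B = rng.normal(scale=0.1, size=(dim, dim))  # goal embedding

def sample_batch():
    # Toy stand-in for "goals from the future of the same trajectory":
    # the positive goal is the state plus a fixed drift and small noise.
    s = rng.normal(size=(batch, dim))
    g = s + 1.0 + 0.1 * rng.normal(size=(batch, dim))
    return s, g

for _ in range(steps):
    s, g = sample_batch()
    zs, zg = s @ A.T, g @ B.T
    logits = zs @ zg.T                        # [batch, batch] critic scores
    logits -= logits.max(axis=1, keepdims=True)
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    # InfoNCE gradient w.r.t. logits: softmax minus identity
    # (row i's positive goal sits in column i).
    dlogits = (p - np.eye(batch)) / batch
    A -= lr * (dlogits @ zg).T @ s
    B -= lr * (dlogits.T @ zs).T @ g

# After training, matched (state, future-goal) pairs should outscore
# mismatched pairs on average.
s, g = sample_batch()
scores = (s @ A.T) @ (g @ B.T).T
```

In the full method, the learned critic doubles as a goal-conditioned value function, which is what lets goal-reaching skills emerge from unlabelled interaction data alone.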