NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 3rd May 2023, 10:00 am (CEST)
Grounding large language models in interactive environments with online reinforcement learning
About the speakers: Thomas Carta (main speaker): I studied theoretical physics and applied mathematics at the Ecole Polytechnique and then at Cambridge. A PhD student since 2021, I am interested in the use of language in RL, and in particular in how structures in language and Large Language Models (LLMs) can help RL agents explore their environment, learn more efficiently and generate their own goals.
Clément Romac: After studying software engineering and theoretical computer science, I am now a first-year PhD student jointly supervised by Pierre-Yves Oudeyer (FLOWERS, Inria) and Thomas Wolf (Hugging Face), studying how autonomous deep RL agents can leverage Large Language Models.
Olivier Sigaud: I’m a Professor of Computer Science and a machine learning researcher at ISIR, a robotics lab at Sorbonne University in Paris, France.
Abstract: Recent work has successfully leveraged the ability of Large Language Models (LLMs) to capture abstract knowledge about the world's physics to solve decision-making problems. Yet the alignment between an LLM's knowledge and the environment can be wrong, limiting functional competence due to a lack of grounding. In this talk, we will present an approach to achieve this alignment through functional grounding: we consider an agent that uses an LLM as a policy which is progressively updated as the agent interacts with the environment, leveraging online Reinforcement Learning to improve its performance at solving goals. Using an interactive textual environment designed to study higher-level forms of functional grounding, together with a set of spatial and navigation tasks, we study several scientific questions: 1) Can LLMs boost sample efficiency for online learning of various RL tasks? 2) How can they boost different forms of generalization? 3) What is the impact of online learning? We study these questions by functionally grounding several variants (in size and architecture) of FLAN-T5.
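To make the approach concrete, here is a minimal sketch (not the authors' implementation) of how an LLM can serve as an RL policy over a small action set: each candidate action string is scored by the model's token log-probabilities given the current goal and observation, the scores are renormalized over the action set to form a policy, and the model weights are updated from the environment's reward. The action list, the prompt format and the plain REINFORCE-style update are illustrative assumptions; in practice a full online RL algorithm such as PPO would drive the updates, and the checkpoint shown is just one FLAN-T5 variant.

# Minimal sketch, assuming a fixed textual action set and a
# REINFORCE-style update. Illustrative only, not the authors' code.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

# Hypothetical action vocabulary of a textual navigation environment.
ACTIONS = ["turn left", "turn right", "go forward", "pick up", "drop", "toggle"]

def action_log_probs(prompt: str) -> torch.Tensor:
    """Log pi(a | prompt) for each action: sum the LM's log-probabilities
    of the action's tokens, then renormalize over the action set."""
    enc = tokenizer([prompt] * len(ACTIONS), return_tensors="pt", padding=True)
    dec = tokenizer(ACTIONS, return_tensors="pt", padding=True)
    out = model(input_ids=enc.input_ids,
                attention_mask=enc.attention_mask,
                labels=dec.input_ids)
    logits = out.logits.log_softmax(-1)            # (actions, tokens, vocab)
    tok_lp = logits.gather(-1, dec.input_ids.unsqueeze(-1)).squeeze(-1)
    tok_lp = tok_lp * dec.attention_mask           # ignore padding tokens
    scores = tok_lp.sum(-1)                        # log P(action string)
    return scores.log_softmax(0)                   # normalize over actions

# One interaction step with a hypothetical goal/observation prompt.
prompt = ("Goal: go to the red door. "
          "Observation: you see a wall 2 steps forward. Action:")
log_probs = action_log_probs(prompt)
action = torch.distributions.Categorical(logits=log_probs).sample()
reward = 1.0  # placeholder; env.step(ACTIONS[action]) would supply this
loss = -log_probs[action] * reward  # REINFORCE stand-in for PPO
loss.backward()
optimizer.step()
optimizer.zero_grad()

Renormalizing over the action set is the key design choice here: it turns a generative LM into a proper policy over the environment's legal actions, so standard policy-gradient machinery applies unchanged.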