NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 9th June 2022, 4:00 pm (GMT +2:00)
Abstract: Could a purely self-supervised foundation model achieve grounded language understanding? I’ll venture a “Yes” answer, provided the model is trained on rich multi-modal data capturing many aspects of our world. In supporting this answer, I’ll first seek to clarify the core concepts involved: foundation models, self-supervision, and grounded language understanding. Having done this, I can reduce our core question to the following criterion: if a system succeeds at hard behavioural tasks in a domain and does so according to a correct causal model of that domain and the language used to describe it, then it has achieved grounded language understanding in that domain. Present-day models fall short of meeting this criterion, but I’ll review extensive evidence that they are on a trajectory to achieving it.
About the speaker: Christopher Potts is Professor and Chair of Linguistics and Professor (by courtesy) of Computer Science at Stanford, and a faculty member in the Stanford NLP Group and the Stanford AI Lab. His group uses computational methods to explore topics in emotion expression, context-dependent language use, systematicity and compositionality, and model interpretability. This research combines methods from linguistics, cognitive psychology, and computer science in the service of both scientific discovery and technology development. He is the author of the 2005 book The Logic of Conventional Implicatures, as well as numerous scholarly papers in computational and theoretical linguistics.