NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 25th March 2021, 11:00 am (GMT +01.00)
A tale of three implicit planners and the XLVIN agent
Speaker: Petar Velickovic is a senior research scientist at DeepMind. He has a PhD in Computer Science from the University of Cambridge (Trinity College), obtained under the supervision of Pietro Liò. His research interests involve devising neural network architectures that operate on nontrivially structured data (such as graphs) and their applications in algorithmic reasoning and computational biology. He has published relevant research at both machine learning venues (NeurIPS, ICLR) and biomedical venues and journals (Bioinformatics, PLOS ONE, JCB, PervasiveHealth). In particular, Petar is the first author of Graph Attention Networks (a popular convolutional layer for graphs) and Deep Graph Infomax (a scalable local/global unsupervised learning pipeline for graphs). His research has been used to substantially improve travel-time predictions in Google Maps. Currently, his main research interest is graph representation learning for algorithmic reasoning. This new and exciting direction seeks to understand and employ the expressive power of GNNs for modelling classical algorithms.
Abstract: Deep reinforcement learning (RL) is one of the most active topics in today’s machine learning landscape. While it offers a remarkably powerful and general blueprint for solving generic problems, it is data-hungry, often requiring observations of millions of transitions before meaningful behavioural patterns start to emerge. Conversely, model-based planning techniques alleviate much of this data requirement by building explicit models of the environment and then performing reasoning (“planning”) within these models. While we have algorithms guaranteed to converge to optimal behaviour in such models, making sure that we are solving for the right environment is often non-trivial and error-prone.
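The algorithms with convergence guarantees that the abstract alludes to include classical dynamic-programming planners such as value iteration. A minimal sketch, on a hypothetical two-state MDP whose states, transitions and rewards are invented purely for illustration (not taken from the talk):

```python
import numpy as np

# Hypothetical tabular MDP for illustration only.
# P[s][a] is a list of (probability, next_state, reward) triples.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor

def value_iteration(P, gamma, tol=1e-8):
    """Iterate the Bellman optimality backup until the value estimates converge."""
    V = np.zeros(len(P))
    while True:
        V_new = np.array([
            max(sum(p * (r + gamma * V[s2]) for p, s2, r in P[s][a])
                for a in P[s])
            for s in P
        ])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

V = value_iteration(P, gamma)
# Here the optimal policy always takes action 1, so V ≈ [19, 20]:
# V(1) = 2 / (1 - 0.9) = 20, and V(0) = 1 + 0.9 * V(1) = 19.
```

Given a *correct* model, this backup contracts toward the unique optimal value function; the abstract's caveat is that the model itself may not match the true environment.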