NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 16th September 2021, 10:00 AM CEST (GMT+02:00)
Leveraging transformers for effective and efficient retrieval
Abstract: While relatively recent, transformer architectures like BERT (Bidirectional Encoder Representations from Transformers) have already demonstrated their ability to substantially improve effectiveness across a wide range of text ranking tasks. The increased effectiveness of these approaches does not come without drawbacks, however: the architecture of these models both incurs significant efficiency costs and creates challenges for applying them to text ranking. In this talk I will describe where and how transformers can be leveraged for text ranking, discuss some of my work tackling the challenges of applying transformers to this task, and outline the challenges that remain and what they mean for applying transformers to text ranking today.
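As a concrete illustration of the setting the abstract refers to, the sketch below shows cross-encoder re-ranking, one common way transformers are applied to text ranking: the model scores each query-passage pair jointly, which is effective but costly, since every candidate requires a full transformer forward pass. This is a minimal sketch only, not the speaker's method; the checkpoint name is an assumption (any monoBERT-style cross-encoder on the Hugging Face hub would work).

```python
# Minimal sketch of cross-encoder re-ranking with a BERT-style model,
# using the Hugging Face `transformers` library.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint; any cross-encoder trained for passage ranking works here.
model_name = "cross-encoder/ms-marco-MiniLM-L-6-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

query = "what causes rainbows"
passages = [
    "Rainbows are caused by refraction and dispersion of sunlight in water droplets.",
    "The capital of France is Paris.",
]

# Encode each (query, passage) pair; the transformer attends across both texts,
# which is the source of both the effectiveness and the efficiency cost.
inputs = tokenizer([query] * len(passages), passages,
                   padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits.squeeze(-1)

# Rank candidate passages by relevance score (higher = more relevant).
for score, passage in sorted(zip(scores.tolist(), passages), reverse=True):
    print(f"{score:.3f}  {passage}")
```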
About the Speaker: Andrew Yates is an assistant professor at the University of Amsterdam, where he focuses on developing content-based neural ranking methods and leveraging them to improve search and downstream tasks. He has co-authored a variety of papers on pre-BERT and BERT-based neural ranking methods, as well as an upcoming book on transformer-based ranking methods. In 2016, Yates received his Ph.D. in Computer Science from Georgetown University, where he worked on information retrieval and information extraction in the medical domain.