NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration
Date: 13th October 2022, 10:00 am (CEST)
Scalable trustworthy AI – beyond “what”, towards “how”
About the speaker: In 2022, I started as an independent group leader at the University of Tübingen, heading the group on Scalable Trustworthy AI (STAI). I am interested in training trustworthy models (e.g. explainable, robust, and probabilistic models) and in obtaining the necessary human supervision and guidance in a cost-effective way. I was a research scientist at Naver AI Lab (2018-2022). I did my PhD in computer vision and machine learning at MPI-INF with Bernt Schiele and Mario Fritz (2014-2018), and my bachelor’s and master’s studies in Maths at the University of Cambridge (2010-2014).
Abstract: ML models are often not trustworthy because they focus too much on “what” rather than “how”. That is, they care only about whether they solve the task at hand (“what”) but not so much about solving it right (“how”). Having recognised this issue, the ML field has been shifting its focus from “what” to “how” over the last five years. Arguably, the most common approach to addressing “how” is to extend the familiar *benchmarking approach* that worked well in the “what” phase: build a benchmark dataset and perform “fair” comparisons by fixing the allowed ingredients. This encourages ever more complex tricks that are likely to simply overfit to the given benchmark (e.g. ImageNet). For the “how” problem, however, I believe it is more important to look for *new types of ingredients*. This will make fair comparison harder, but I believe it is the only way to make the “how” problem solvable at all. I will give an overview of my previous search for such ingredients that make models more explainable and more robust to distribution shifts. I will then discuss exciting future sources of such ingredients.