NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 15th December 2020, 10am (GMT +01.00)
Speaker: Stephan Günnemann is a Professor at the Department of Informatics, Technical University of Munich, and Acting Director of the Munich Data Science Institute. His main research focuses on making machine learning techniques reliable, thus enabling their safe and robust use in various application domains. Prof. Günnemann is particularly interested in machine learning methods for complex data domains such as graphs/networks and temporal data. His work on subspace clustering on graphs and on the adversarial robustness of graph neural networks received best research paper awards at ECML-PKDD and KDD.
Stephan acquired his doctoral degree in computer science at RWTH Aachen University, Germany. From 2012 to 2015 he was an associate at Carnegie Mellon University, USA. He has also been a visiting researcher at Simon Fraser University, Canada, and a research scientist at the Research & Technology Center of Siemens AG. In 2017 he became a Junior Fellow of the German Informatics Society. Stephan has served as a (senior) PC member/area chair at conferences including NeurIPS, ICML, KDD, ECML-PKDD, AAAI, and WWW.
Abstract: Graph neural networks (GNNs) have achieved impressive results in various graph learning tasks and have found their way into many applications such as molecular property prediction, cancer classification, fraud detection, and knowledge graph reasoning. Despite their proliferation, studies of their robustness properties are still very limited. Yet in the domains where graph learning methods are used, the data is rarely perfect and adversaries (e.g. on the web) are common. In safety-critical environments and in decision-making contexts involving humans in particular, it is crucial to ensure the GNNs' reliability. In my talk, I will shed light on the robustness of state-of-the-art graph-based learning techniques. I will highlight the unique challenges and opportunities that come with the graph setting and showcase the methods' vulnerabilities. Based on these insights, I will discuss different principles that allow us to certify robustness, giving us provable guarantees about a GNN's behavior, and ways to improve their reliability.