NAVER LABS Europe seminars are open to the public. This seminar is virtual and requires registration.
Date: 15th September 2022, 10:00 am (CEST)
Disentangled representations to achieve fairness in NLP
Abstract: The impressive performance of Large Language Models on various NLP tasks should not make us forget that typical word embeddings reproduce many of our societal gender and racial biases. These biases induce disparities in treatment across subgroups of the population, and their systematic study, detection and mitigation is instrumental in building reliable and trustworthy systems. In this presentation, I will provide a gentle introduction to fairness and then dive into the details of ongoing work to achieve fairness for some NLP tasks. This work leverages ideas from Information Theory and contrastive learning to construct embeddings that are disentangled from a fixed protected attribute.
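The abstract only names the ingredients of the approach, so the sketch below is a rough, hypothetical illustration of the general idea rather than the speaker's actual method: a projection head is trained with an InfoNCE-style contrastive loss whose positives share a task label but differ in the protected attribute, pulling the two subgroups together in embedding space. All names, dimensions and the toy data are assumptions made for illustration.

```python
# Minimal sketch (PyTorch), assuming a batch of pretrained embeddings x,
# task labels y and a binary protected attribute a. Purely illustrative:
# not the method presented in the seminar.
import torch
import torch.nn.functional as F

def fair_contrastive_loss(z, y, a, temperature=0.1):
    """InfoNCE-style loss whose positives share the task label y but have
    a different protected attribute a, encouraging attribute invariance."""
    z = F.normalize(z, dim=1)                     # unit-norm embeddings
    sim = z @ z.t() / temperature                 # pairwise similarities
    n = z.size(0)
    eye = torch.eye(n, dtype=torch.bool)
    # positives: same task label, opposite protected attribute, not self
    pos = (y[:, None] == y[None, :]) & (a[:, None] != a[None, :]) & ~eye
    logits = sim.masked_fill(eye, float('-inf'))  # exclude self-similarity
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-probability of the positives, for anchors that have any
    pos_sum = log_prob.masked_fill(~pos, 0.0).sum(dim=1)
    counts = pos.sum(dim=1)
    keep = counts > 0
    return -(pos_sum[keep] / counts[keep]).mean()

# Toy usage: project 768-d sentence embeddings into a 128-d "fair" space.
torch.manual_seed(0)
x = torch.randn(64, 768)                 # stand-in for pretrained embeddings
y = torch.randint(0, 2, (64,))           # task labels
a = torch.randint(0, 2, (64,))           # binary protected attribute
proj = torch.nn.Sequential(torch.nn.Linear(768, 256), torch.nn.ReLU(),
                           torch.nn.Linear(256, 128))
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = fair_contrastive_loss(proj(x), y, a)
    loss.backward()
    opt.step()
```

Treating same-label, cross-attribute pairs as positives is one simple way to keep task information in the representation while discarding the protected attribute, loosely echoing the mutual-information view the abstract alludes to.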
About the speaker: Nathan Noiry is COO and co-founder of althiqa, the startup that makes reporting and communication about AI simple and trustworthy for all stakeholders. He graduated from ENS Lyon in mathematics and holds a PhD in probability theory (random matrices and random graphs). He spent the past two years as a post-doctoral researcher at Telecom Paris, working on several aspects of trustworthy AI for industry (biases, fairness, robustness…). LinkedIn page of althiqa and Nathan's webpage.