Friday PM, 28th August 2020
Part 1: Mathieu Salzmann: Basic Concepts and Traditional Methods [PDF slides] [video]
Part 2: Gabriela Csurka: Visual DA in Deep Learning Era [PDF slides] [video]
Part 3: Tatiana Tommasi (45 min): Beyond Classical Domain Adaptation [PDF slides] [video]
Part 4: Timothy M. Hospedales (45 min): Domain Adaptation for Visual Applications: Perspectives and Outlook [PDF slides] [video]
Gabriela Csurka is a Principal Scientist at NAVER LABS Europe, France. Her main research interests are computer vision for image understanding, multi-view 3D reconstruction, visual localization, multi-modal information retrieval, and domain adaptation (DA) and transfer learning. She has contributed to around 100 scientific publications, several on the topic of DA, received the best paper award at the Transferring and Adapting Source Knowledge in Computer Vision workshop (Task-CV) in 2016, successfully participated in DA-related challenges (ImageClefDA’14, VisDA’17), and has given invited talks on domain adaptation (ACIVS’15, Task-CV’17, OpenMIC’18 and Task-CV’19). In 2017 she edited a book on Domain Adaptation in Computer Vision Applications.
Timothy M. Hospedales is a Reader at the University of Edinburgh, a Principal Scientist at the Samsung AI Research Centre, Cambridge, and an Alan Turing Institute Fellow. His research focuses on lifelong machine learning, broadly defined to include multi-domain/multi-task learning, domain adaptation, transfer learning and meta-learning, with applications including computer vision, vision and language, and reinforcement learning for control and finance. He has co-authored numerous papers on domain adaptation, domain generalisation, and transfer learning in major venues including CVPR, ICCV, ECCV, ICML and AAAI. He teaches computer vision at the University of Edinburgh and has given invited talks on these topics at Task-CV and the Deep Learning in Finance Summit, as well as tutorials at ACM Multimedia and several summer schools.
Mathieu Salzmann is a Senior Researcher at EPFL, with broad expertise in computer vision and machine learning. He has published several articles on domain adaptation in major conferences and journals, and contributed a chapter on matching distributions to G. Csurka’s book on Domain Adaptation in Computer Vision Applications. Furthermore, he has been invited to present his domain adaptation work at various venues, including the Workshop on Domain Adaptation and Few-Shot Learning, the University of Oxford and ETHZ.
Tatiana Tommasi is an Assistant Professor at Politecnico di Torino, Italy, and an affiliated researcher at the Italian Institute of Technology. She pioneered the area of transfer learning for computer vision and has extensive experience in domain adaptation, domain generalization and multimodal learning, with applications in robotics and medical imaging. Tatiana received the best paper award at the first edition of the Task-CV workshop at ECCV’14 and has since led the organization of the subsequent editions of the workshop. She also organized workshops on similar topics at NIPS’13 and NIPS’14, and taught a tutorial at ECCV’14.
While huge volumes of unlabeled data are generated and made available in many domains, the cost of acquiring data labels remains high. At the same time, solving problems with deep neural networks has become extremely popular; however, current methods typically rely on massive amounts of labeled training data to achieve high performance. To overcome the burden of annotation, the literature proposes two families of solutions: exploiting available unlabeled data from the same domain, referred to as semi-supervised learning, and exploiting labeled data or trained models available in similar yet different domains, referred to as domain adaptation. This tutorial focuses on the latter. Domain adaptation is also of increasing societal importance as vision systems are deployed in mission-critical applications whose predictions have real-world impact, but where real-world test data statistics can differ significantly from lab-collected training data. Our aim is to give an overview of visual domain adaptation methods, a field whose popularity in the computer vision community has grown significantly in the last few years, as attested by the proliferation of DA-related papers published in top-ranked computer vision and machine learning conferences.
1. In the first part, we will present domain adaptation from a theoretical point of view. In particular, we will define the domain shift problem and illustrate its importance in computer vision. We will then introduce different ways to measure the distribution mismatch between two domains, referred to as the source and target domains. Specifically, we will review different distance metrics between probability distributions, such as the Maximum Mean Discrepancy (MMD) and the Hellinger distance, as well as different ways to represent the source and target data, for instance using subspaces, so as to compare them (see the MMD sketch after this outline). We will further explain how these techniques have been used in the past to design domain adaptation algorithms. In this context, we will first review the main historical contributions in domain adaptation, and then briefly study how these contributions have been translated to deep networks.
2. In the second part of the tutorial, we will first discuss and compare different domain adaptation strategies that exploit deep architectures for visual recognition, such as shallow models used with pretrained or fine-tuned deep features, and deep architectures designed for domain adaptation. Then, we will review recent trends in domain adaptation, including deep discriminative models with various discrepancy-based and adversarial losses (see the gradient reversal sketch after this outline), generative and encoder-decoder based models, network parameter adaptation methods, and semi-supervised and curriculum learning based models. We will present methods proposed in the literature for image classification, semantic segmentation, object detection and other tasks.
3. In the third part, we will discuss particular cases that differ from the standard domain adaptation setting: source and target data may not cover exactly the same set of classes (partial and open-set DA), the target data may arrive as an online stream rather than being available all at once (continuous DA), or several sources may be provided with different annotation levels (deep cocktail, multi-source and predictive domain adaptation). Finally, domain generalization is the most challenging condition, where no target data is available at training time. We will also relate domain adaptation and generalization to self-supervised learning (see the rotation-prediction sketch after this outline).
4. The tutorial will conclude with a final part dedicated to unifying perspectives and outlook. We will present deep tensor methods and meta-learning methods that provide frameworks linking domain adaptation and domain generalisation to related research topics, including multi-task/multi-domain learning and few-shot learning (see the episodic meta-learning sketch below). We will draw connections to related issues such as adversarial robustness, and to further applications such as SBIR, VQA and deep RL.
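To make Part 1's discussion of distribution mismatch concrete, here is a minimal sketch of the squared Maximum Mean Discrepancy between two samples under a Gaussian RBF kernel, written in PyTorch. The function name `rbf_mmd2` and the fixed bandwidth are our own illustrative choices, not code from the tutorial.

```python
import torch

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimator of the squared MMD between samples x (n, d) and
    y (m, d), using a Gaussian RBF kernel with bandwidth sigma."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

# Two samples from the same distribution give a value near zero, while a
# mean-shifted "target" sample gives a clearly larger discrepancy.
src, src2 = torch.randn(512, 2), torch.randn(512, 2)
tgt = torch.randn(512, 2) + 1.0
print(rbf_mmd2(src, src2).item(), rbf_mmd2(src, tgt).item())
```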
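Among the adversarial losses covered in Part 2, the gradient reversal layer popularised by DANN (Ganin and Lempitsky) is easy to state in code. Below is a minimal PyTorch sketch; the `feature_extractor` and `domain_classifier` modules in the usage comment are hypothetical placeholders for whatever networks a given method uses.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; scales gradients by -lambda in the
    backward pass, so minimising the domain classification loss pushes the
    upstream feature extractor towards domain-invariant features."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Hypothetical use inside a training step:
# feats = feature_extractor(torch.cat([x_src, x_tgt]))
# dom_logits = domain_classifier(grad_reverse(feats, lambd=0.1))
# dom_loss = torch.nn.functional.cross_entropy(dom_logits, dom_labels)
```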
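Part 3 relates domain adaptation and generalization to self-supervised learning; one common recipe attaches an auxiliary self-supervised head (here rotation prediction, in the spirit of jigsaw/rotation multi-task approaches) to a shared backbone, so the auxiliary loss can be computed on unlabeled images from any domain. This PyTorch sketch is illustrative; the module and function names are our own.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared backbone with a supervised class head and a self-supervised
    rotation head trained jointly."""
    def __init__(self, backbone, feat_dim, num_classes):
        super().__init__()
        self.backbone = backbone
        self.cls_head = nn.Linear(feat_dim, num_classes)
        self.rot_head = nn.Linear(feat_dim, 4)  # 0/90/180/270 degrees

    def forward(self, x):
        feats = self.backbone(x)
        return self.cls_head(feats), self.rot_head(feats)

def rotate_batch(x):
    """Rotate each NCHW image by a random multiple of 90 degrees and return
    the rotation index as the self-supervised label."""
    k = torch.randint(0, 4, (x.size(0),))
    xr = torch.stack([torch.rot90(img, int(ki), dims=(1, 2))
                      for img, ki in zip(x, k)])
    return xr, k

# Total loss: supervised cross-entropy on labeled source images plus a
# weighted auxiliary term on rotated images from any domain, e.g.
# loss = ce(cls_logits_src, y_src) + alpha * ce(rot_logits, rot_labels)
```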
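Part 4's meta-learning framing of domain generalisation can be sketched as an MLDG-like episodic step (after Li et al., "Learning to Generalize"): hold out one source domain as meta-test, take a virtual gradient step on the remaining domains, and back-propagate the held-out loss through that step. The helper below is a simplified sketch, assuming PyTorch >= 2.0 for torch.func.functional_call; the function name and the single inner step are our own choices.

```python
import torch
from torch.func import functional_call

def mldg_step(model, loss_fn, optimizer, domain_batches, inner_lr=0.01):
    """One episodic meta-learning step over a list of (x, y) batches,
    one batch per source domain; the last domain plays meta-test."""
    (x_te, y_te), meta_train = domain_batches[-1], domain_batches[:-1]

    # Meta-train loss on the held-in domains.
    train_loss = sum(loss_fn(model(x), y) for x, y in meta_train)

    # Virtual SGD step, kept differentiable via create_graph=True.
    names, params = zip(*model.named_parameters())
    grads = torch.autograd.grad(train_loss, params, create_graph=True)
    updated = {n: p - inner_lr * g for n, p, g in zip(names, params, grads)}

    # Meta-test loss evaluated with the virtually updated weights.
    test_loss = loss_fn(functional_call(model, updated, (x_te,)), y_te)

    optimizer.zero_grad()
    (train_loss + test_loss).backward()
    optimizer.step()
```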