Lifelong Representation Learning
MIAI Chair
NAVER LABS Europe is leading a chair on Lifelong Representation Learning within the French national AI institute MIAI.
MIAI Grenoble Alpes (Multidisciplinary Institute in Artificial Intelligence) aims to conduct research in artificial intelligence at the highest level, offer attractive courses for students and professionals of all levels, support innovation in large companies, SMEs and startups, and inform and interact with citizens on all aspects of AI.
NAVER LABS Europe leads the research chair on Lifelong Representation Learning, one of the three chairs in the machine learning and reasoning research line of the MIAI institute.
Lifelong Representation Learning
Given a set of problems to solve, the dominant paradigm in the AI community has been to solve each problem or task independently. This is in sharp contrast with the human ability to build on past experience and transfer knowledge to speed up learning on a new task. To mimic this ability, the machine learning community has introduced the concept of continual learning, also called lifelong learning. The main advantage of this paradigm is that it enables learning from less data, and it often leads to faster learning and better generalization. From an industrial standpoint, the potential of lifelong learning is tremendous: machine learning models could be deployed faster because there would be less need to collect and label new data.
Contributors:
- Yannis Kalantidis – Senior Research Scientist
- Diane Larlus – Principal Research Scientist
- Gregory Rogez – Senior Research Scientist
- Mert Bülent Sariyildiz – Research Scientist
Teaching
- ‘Découvrir l’intelligence artificielle par le jeu’ (‘Discover AI with games’, a professional development programme of the Maison pour la Science Alpes-Dauphiné) is an annual two-day training session for high school teachers in the Grenoble area. The NAVER LABS Europe classes are taught by Diane Larlus and Florent Perronnin.
Blog posts
- Continual learning of visual representations without catastrophic forgetting: Using domain randomization and meta-learning, computer vision models forget less when exposed to training samples from new domains. Remembering is crucial for the deployment of self-driving cars and robots that interact with dynamic environments.
- Improving self-supervised representation learning by synthesizing challenging negatives: Contrastive learning is an effective way of learning visual representations in a self-supervised manner. Pushing the embeddings of two transformed versions of the same image (the positive pair) close to each other, and away from the embeddings of all other images (the negatives), with a contrastive loss leads to powerful and transferable representations. We demonstrate that harder negatives enable better and faster contrastive self-supervised learning, and we propose ways of synthesizing harder negative features on the fly with minimal computational overhead (see the first code sketch after this list).
- Learning Visual Representations with Caption Annotations: A new modeling task masks tokens in image captions so that mid-sized sets of captioned images can rival large-scale labelled image sets for learning generic visual representations (see the second code sketch after this list).
- The short memory of artificial neural networks: a research overview of current work in lifelong learning.
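The first sketch below illustrates the general idea behind the hard-negative post above: an InfoNCE-style contrastive loss in which extra hard negatives are synthesized on the fly by mixing the hardest negative embeddings. The mixing strategy, tensor shapes and hyper-parameters are illustrative assumptions, not the exact recipe described in the post.

```python
# Minimal sketch: contrastive loss with synthesized hard negatives.
# The mixing scheme and all hyper-parameters below are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss_with_mixing(query, positive, negatives,
                                 temperature=0.07, num_synthetic=16):
    """query: (d,), positive: (d,), negatives: (n, d); all L2-normalized."""
    # Rank the real negatives by similarity to the query (hardest first).
    sims = negatives @ query                                   # (n,)
    hardest = negatives[sims.argsort(descending=True)[:2 * num_synthetic]]

    # Synthesize extra hard negatives by convex mixing of pairs of hard negatives.
    idx = torch.randint(0, hardest.size(0), (num_synthetic, 2))
    alpha = torch.rand(num_synthetic, 1)
    synthetic = alpha * hardest[idx[:, 0]] + (1 - alpha) * hardest[idx[:, 1]]
    synthetic = F.normalize(synthetic, dim=1)                  # back on the unit sphere

    # InfoNCE-style loss: the positive competes with real + synthetic negatives.
    keys = torch.cat([positive.unsqueeze(0), negatives, synthetic], dim=0)
    logits = (keys @ query) / temperature                      # (1 + n + num_synthetic,)
    target = torch.zeros(1, dtype=torch.long)                  # index 0 is the positive
    return F.cross_entropy(logits.unsqueeze(0), target)

# Toy usage with random, normalized embeddings.
d, n = 128, 256
q = F.normalize(torch.randn(d), dim=0)
p = F.normalize(torch.randn(d), dim=0)
neg = F.normalize(torch.randn(n, d), dim=1)
print(contrastive_loss_with_mixing(q, p, neg).item())
```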
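In the same spirit, here is a minimal sketch of the masked-caption idea from the caption-annotation post: one caption token is hidden and predicted from the remaining words together with the image, which pushes the visual encoder to learn useful representations. The architecture, module sizes and the use of token id 0 as the [MASK] token are illustrative assumptions, not the published model.

```python
# Minimal sketch: predicting masked caption tokens from text context + image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedCaptionModel(nn.Module):
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.visual_encoder = nn.Sequential(                   # stand-in for a CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim))
        self.word_emb = nn.Embedding(vocab_size, dim)          # id 0 doubles as [MASK]
        self.text_encoder = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)                 # predicts the hidden word

    def forward(self, image, masked_tokens, mask_pos):
        img_feat = self.visual_encoder(image)                         # (B, dim)
        ctx, _ = self.text_encoder(self.word_emb(masked_tokens))      # (B, T, dim)
        batch = torch.arange(masked_tokens.size(0))
        fused = ctx[batch, mask_pos] + img_feat                       # fuse text and image
        return self.head(fused)                                       # vocabulary logits

# Toy usage: hide one word per caption and predict it from the rest plus the image.
model = MaskedCaptionModel()
images = torch.randn(4, 3, 64, 64)
tokens = torch.randint(1, 10000, (4, 12))        # caption token ids (0 reserved for [MASK])
mask_pos = torch.randint(0, 12, (4,))            # position of the hidden word
targets = tokens[torch.arange(4), mask_pos]      # the words to recover
masked = tokens.clone()
masked[torch.arange(4), mask_pos] = 0            # replace them with the [MASK] id
loss = F.cross_entropy(model(images, masked, mask_pos), targets)
print(loss.item())
```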