Lifelong Representation Learning
Naver Labs Europe is leading a chair on Lifelong Representation Learning as part of the MIAI institute.
MIAI Grenoble Alpes (Multidisciplinary Institute in Artificial Intelligence) aims to conduct research in artificial intelligence at the highest level, to offer attractive courses for students and professionals of all levels, to support innovation in large companies, SMEs and startups, and to inform and interact with citizens on all aspects of AI.
Given a set of problems to solve, the dominant paradigm in the AI community has been to solve each problem or task independently. This is in sharp contrast with the human capability to build on past experience and transfer knowledge to speed up the learning of a new task. To mimic this capability, the machine learning community has introduced the concept of continual learning, or lifelong learning. The main advantage of this paradigm is that it enables learning with less data, and it often lets models learn faster and generalize better. From an industrial standpoint, the potential of lifelong learning is tremendous, as it would mean deploying machine learning models faster by reducing the need to collect and label new data.
- Yannis Kalantidis – Senior Research Scientist
- Diane Larlus – Principal Research Scientist
- Florent Perronnin – Strategy and Projects
- Gregory Rogez – Senior Research Scientist
- Mert Bülent Sariyildiz – PhD Student
- Riccardo Volpi – Research Scientist
Continual learning of visual representations without catastrophic forgetting: Using domain randomization and meta-learning, computer vision models forget less when exposed to training samples from new domains. Remembering is crucial for deploying self-driving cars and robots, which interact with dynamic environments.
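The combination above can be illustrated with a minimal sketch: random perturbations of global input statistics stand in for domain randomization, and a Reptile-style meta-update over those randomized domains stands in for the meta-learning component. The function names, the linear model, and all hyperparameters are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

def randomize_domain(x, rng):
    """Domain randomization sketch: perturb global input statistics
    (a hypothetical stand-in for appearance changes across domains)."""
    gain = rng.uniform(0.8, 1.2)   # illustrative perturbation ranges
    bias = rng.uniform(-0.1, 0.1)
    return np.clip(gain * x + bias, 0.0, 1.0)

def meta_train(w, data, labels, inner_steps=5, inner_lr=0.1,
               meta_lr=0.5, rounds=50, rng=None):
    """Reptile-style meta-update over randomized domains (illustrative).

    Each round: sample a randomized domain, adapt a copy of the weights
    with a few gradient steps, then move the slow weights toward the
    adapted ones, so they stay easy to adapt across domains."""
    rng = rng or np.random.default_rng(0)
    for _ in range(rounds):
        x = randomize_domain(data, rng)
        w_inner = w.copy()
        for _ in range(inner_steps):
            pred = x @ w_inner
            grad = x.T @ (pred - labels) / len(x)  # MSE gradient, linear model
            w_inner -= inner_lr * grad
        w += meta_lr * (w_inner - w)  # Reptile outer update
    return w
```

The point of the sketch is the training loop structure, not the toy model: inner adaptation happens on a randomized domain, and the outer update accumulates knowledge that transfers across domains.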
Improving self-supervised representation learning by synthesizing challenging negatives: Contrastive learning is an effective way of learning visual representations in a self-supervised manner. A contrastive loss pulls the embeddings of two transformed versions of the same image (the positive pair) close to each other, while pushing them away from the embeddings of all other images (the negatives); this leads to powerful and transferable representations. We show that harder negatives enable better and faster contrastive self-supervised learning, and we propose ways of synthesizing harder negative features on the fly, with minimal computational overhead.
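A minimal sketch of the synthesis step described above: rank the negatives by similarity to the query, then create new negatives as convex mixtures of the hardest ones, directly in embedding space. The function name and hyperparameters are assumptions for illustration; the actual method in the paper below ("Hard Negative Mixing for Contrastive Learning") has additional variants.

```python
import numpy as np

def synthesize_hard_negatives(query, negatives, n_synth=4, n_hardest=8, rng=None):
    """Sketch of hard-negative synthesis by feature mixing.

    query:     (d,)   L2-normalized query embedding
    negatives: (n, d) L2-normalized negative embeddings
    Returns (n_synth, d) synthetic negatives mixed from the hardest pairs."""
    rng = rng or np.random.default_rng(0)
    # The hardest negatives are the ones most similar to the query.
    sims = negatives @ query
    hardest = negatives[np.argsort(-sims)[: max(2, n_hardest)]]
    synth = []
    for _ in range(n_synth):
        i, j = rng.choice(len(hardest), size=2, replace=False)
        alpha = rng.uniform(0.0, 1.0)
        mix = alpha * hardest[i] + (1.0 - alpha) * hardest[j]
        synth.append(mix / np.linalg.norm(mix))  # back onto the unit sphere
    return np.stack(synth)
```

Because the mixing happens on already-computed features, the extra cost per training step is a handful of vector operations, which is why the overhead stays minimal.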
- Learning Visual Representations with Caption Annotations: A new modeling task masks tokens in image captions to enable mid-sized sets of captioned images to rival large-scale labelled image sets for learning generic visual representations.
- The short memory of artificial neural networks: a research overview of current work in lifelong learning.
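The caption-masking task mentioned above can be sketched as follows: hide a random subset of caption tokens and keep them as prediction targets, so the visual model must recover them from the image and the remaining context. This is a simplified sketch (it omits refinements such as BERT's partial token replacement); the function name and masking rate are illustrative assumptions.

```python
import numpy as np

def mask_caption(tokens, mask_token="[MASK]", mask_prob=0.15, rng=None):
    """Randomly mask caption tokens for a masked-prediction objective.

    Returns the masked caption and a parallel list of targets:
    the hidden token at masked positions, None where the token stays visible."""
    rng = rng or np.random.default_rng(0)
    masked, targets = [], []
    for tok in tokens:
        if rng.uniform() < mask_prob:
            masked.append(mask_token)  # hide the token from the model...
            targets.append(tok)        # ...and ask it to predict it back
        else:
            masked.append(tok)
            targets.append(None)
    return masked, targets
```

Training a visual backbone to fill in these blanks turns each caption into many supervision signals, which is why mid-sized captioned sets can rival much larger labelled ones.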
- Concept generalization in visual representation learning. Mert Bülent Sariyildiz, Yannis Kalantidis, Diane Larlus, Karteek Alahari. International Conference on Computer Vision (ICCV), 2021.
- Learning in a changing environment: memory strategies for streaming learning under distributional shifts. Riccardo Volpi, Cesar de Souza, Yannis Kalantidis, Diane Larlus, Gregory Rogez. Findings of the Workshop on Continual Learning in Computer Vision (CLVISION) at the Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
- Continual adaptation of visual representations via domain randomization and meta-learning. Riccardo Volpi, Diane Larlus, Gregory Rogez. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021. (oral)
- Learning Visual Representations with Caption Annotations. Mert Bülent Sariyildiz, Julien Perez, Diane Larlus. European Conference on Computer Vision (ECCV), 2020.
- Hard Negative Mixing for Contrastive Learning. Yannis Kalantidis, Mert Bülent Sariyildiz, Noé Pion, Philippe Weinzaepfel, Diane Larlus. Neural Information Processing Systems conference (NeurIPS), 2020.