Lifelong Representation Learning

NAVER LABS Europe is leading a chair on Lifelong Representation Learning as part of the MIAI institute.

MIAI Grenoble Alpes (Multidisciplinary Institute in Artificial Intelligence) aims to conduct research in artificial intelligence at the highest level, to offer attractive courses for students and professionals of all levels, to support innovation in large companies, SMEs and startups, and to inform and interact with citizens on all aspects of AI.

NAVER LABS Europe leads the research chair on Lifelong Representation Learning, one of the three chairs of the machine learning and reasoning line of research of the MIAI institute.

Lifelong Representation Learning

Given a set of problems to solve, the dominant paradigm in the AI community has been to solve each problem or task independently. This is in sharp contrast with the human capability to build on past experience and transfer knowledge to speed up the learning of a new task. To mimic this capability, the machine learning community has introduced the concept of continual learning, or lifelong learning. The main advantages of this paradigm are that it enables learning from less data, and it often allows models to learn faster and generalize better. From an industrial standpoint, the potential of lifelong learning is tremendous, as it would mean deploying machine learning models faster by bypassing the need to collect and label new data for each task.

“Lifelong visual representation learning”, Diane Larlus – Linkmedia SpeaksScience, Inria

Contributors:

Blog posts

  • Continual learning of visual representations without catastrophic forgetting: Using domain randomization and meta-learning, computer vision models forget less when exposed to training samples from new domains. Remembering is a crucial element in the deployment of self-driving cars and robots that operate in dynamic environments.

  • Improving self-supervised representation learning by synthesizing challenging negatives: Contrastive learning is an effective way of learning visual representations in a self-supervised manner. Pushing the embeddings of two transformed versions of the same image (the positive pair) close to each other, and away from the embeddings of all other images (the negatives), with a contrastive loss leads to powerful and transferable representations. We show that harder negatives enable better and faster contrastive self-supervised learning, and we propose ways of synthesizing harder negative features on the fly with minimal computational overhead (a minimal sketch of this idea follows the list).

  • Learning Visual Representations with Caption Annotations: A new modeling task masks tokens in image captions to enable mid-sized sets of captioned images to rival large-scale labelled image sets for learning generic visual representations (a sketch of this masking objective also follows the list).
  • The short memory of artificial neural networks: a research overview of current work in lifelong learning.
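
Below is a minimal sketch of the hard-negative synthesis idea described in the bullet on challenging negatives. It assumes a PyTorch setting with one query and one positive embedding per image and a pool of negative embeddings (e.g. a memory queue); the function name, the number of hard negatives kept, and the mixing scheme are illustrative assumptions, not the exact recipe from the blog post.

```python
# Hedged sketch: synthesizing "hard" negatives for an InfoNCE-style contrastive
# loss by mixing the hardest existing negatives. Hyper-parameters (hardest_k,
# n_synth, temperature) are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss_with_synthetic_negatives(q, k_pos, negatives,
                                               temperature=0.2,
                                               hardest_k=64, n_synth=32):
    """q:         (B, D) query embeddings (one view of each image)
       k_pos:     (B, D) positive embeddings (the other view)
       negatives: (N, D) embeddings of other images (e.g. a memory queue)."""
    q = F.normalize(q, dim=1)
    k_pos = F.normalize(k_pos, dim=1)
    negatives = F.normalize(negatives, dim=1)

    # Rank the existing negatives by similarity to each query and keep the hardest ones.
    sim = q @ negatives.t()                             # (B, N)
    hard_idx = sim.topk(hardest_k, dim=1).indices       # (B, hardest_k)
    hard = negatives[hard_idx]                          # (B, hardest_k, D)

    # Synthesize extra negatives on the fly as convex combinations of pairs of
    # hard negatives (cheap: no extra forward passes through the encoder).
    b = torch.arange(q.size(0)).unsqueeze(1)
    i = torch.randint(hardest_k, (q.size(0), n_synth))
    j = torch.randint(hardest_k, (q.size(0), n_synth))
    alpha = torch.rand(q.size(0), n_synth, 1)
    synth = F.normalize(alpha * hard[b, i] + (1 - alpha) * hard[b, j], dim=2)

    # InfoNCE: one positive logit vs. real + synthetic negative logits.
    l_pos = (q * k_pos).sum(dim=1, keepdim=True)        # (B, 1)
    l_neg_synth = torch.einsum('bd,bsd->bs', q, synth)  # (B, n_synth)
    logits = torch.cat([l_pos, sim, l_neg_synth], dim=1) / temperature
    labels = torch.zeros(q.size(0), dtype=torch.long)   # the positive is at index 0
    return F.cross_entropy(logits, labels)
```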
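The next sketch illustrates the masked caption objective mentioned in the bullet on caption annotations: some caption tokens are masked and the model must recover them while conditioning on the image, so the visual encoder learns transferable features as a by-product. The architecture shown (a small stand-in convolutional encoder plus a transformer over caption tokens prefixed by the pooled image feature, without positional embeddings) and all hyper-parameters are assumptions for illustration only, not the model from the blog post.

```python
# Hedged sketch: image-conditioned masked token prediction over captions.
import torch
import torch.nn as nn

class MaskedCaptionModel(nn.Module):
    def __init__(self, vocab_size, d_model=512, n_layers=4, n_heads=8):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in visual encoder
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, d_model))
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, images, caption_ids):
        img = self.backbone(images).unsqueeze(1)          # (B, 1, D) pooled image feature
        txt = self.token_emb(caption_ids)                 # (B, T, D) caption tokens
        h = self.encoder(torch.cat([img, txt], dim=1))    # fuse image and text
        return self.head(h[:, 1:])                        # logits for each caption position

def masked_caption_loss(model, images, caption_ids, mask_token_id, p=0.15):
    # Randomly mask a fraction p of the caption tokens and ask the model to recover them.
    mask = torch.rand_like(caption_ids, dtype=torch.float) < p
    logits = model(images, caption_ids.masked_fill(mask, mask_token_id))
    return nn.functional.cross_entropy(
        logits[mask], caption_ids[mask])   # loss only on the masked positions
```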

Publications

Teaching
