Hard negative mixing for contrastive learning - Naver Labs Europe

Abstract

Contrastive learning has become a key component of self-supervised learning (SSL) approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation, which provides models with diverse examples, is crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e. the effect of hard negatives, has not been sufficiently studied or exploited. To get more meaningful negative samples, current top contrastive SSL approaches either substantially increase the batch sizes or keep very large memory banks, despite the fact that increasing their size leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing approaches, we propose several hard negative mixing strategies at the feature level that can be computed on-the-fly with minimal computational overhead. We exhaustively ablate our approach on standard linear classification and object detection tasks and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art SSL method.
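
To make the idea of feature-level hard negative mixing concrete, below is a minimal PyTorch sketch, not the paper's released implementation. It assumes a MoCo-style setup with L2-normalized query and key embeddings and a memory bank of negatives; the function and parameter names (mochi_logits, n_hard, n_synth) are illustrative, and details such as how mixing pairs are sampled are simplifications of the general idea described above.

```python
# Sketch of feature-level hard negative mixing for a MoCo-style contrastive loss.
# All embeddings are assumed to be L2-normalized. Names and defaults are illustrative.
import torch
import torch.nn.functional as F

def mochi_logits(q, k, queue, n_hard=64, n_synth=16, tau=0.2):
    """q: (B, D) queries, k: (B, D) positive keys, queue: (K, D) negatives."""
    pos = torch.einsum("bd,bd->b", q, k).unsqueeze(1)      # (B, 1) positive logits
    neg = torch.einsum("bd,kd->bk", q, queue)               # (B, K) negative logits

    # Rank negatives by similarity to each query and keep the hardest ones.
    hard_idx = neg.topk(n_hard, dim=1).indices              # (B, n_hard)
    hard = queue[hard_idx]                                   # (B, n_hard, D)

    # Synthesize extra negatives as convex combinations of random pairs of
    # hard negatives, then re-project onto the unit sphere.
    B = q.size(0)
    rows = torch.arange(B, device=q.device).unsqueeze(1)
    i = torch.randint(n_hard, (B, n_synth), device=q.device)
    j = torch.randint(n_hard, (B, n_synth), device=q.device)
    alpha = torch.rand(B, n_synth, 1, device=q.device)
    mixed = alpha * hard[rows, i] + (1 - alpha) * hard[rows, j]
    mixed = F.normalize(mixed, dim=-1)                       # (B, n_synth, D)

    synth = torch.einsum("bd,bnd->bn", q, mixed)              # (B, n_synth)
    logits = torch.cat([pos, neg, synth], dim=1) / tau
    labels = torch.zeros(B, dtype=torch.long, device=q.device)  # positive at index 0
    return logits, labels

# Usage: plug the augmented logits into the usual InfoNCE / cross-entropy loss.
# loss = F.cross_entropy(*mochi_logits(q, k, queue))
```

Because the mixing operates on already-computed embeddings in the memory bank, it adds only a few tensor operations per batch, which is why the overhead stays small relative to the encoder forward passes.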

Yannis Kalantidis
I have been a research scientist at Naver Labs Europe since March 2020 and am a member of the Computer Vision team. My research interests include representation learning, video understanding, multi-modal learning and large-scale vision and language. For a full list of publications, please visit my Google Scholar profile: https://scholar.google.com/citations?user=QJZQgN8AAAAJ&hl=en or my personal website: https://www.skamalas.com/

I am very passionate about making the research community tackle more socially impactful problems. Together with Laura Sevilla-Lara, I co-lead the Computer Vision for Global Challenges initiative; please visit https://www.cv4gc.org/ for more info.

I grew up and lived in Greece until 2015, with brief breaks in Sweden, Spain and the United States. I lived in the Bay Area from 2015 to 2020, working as a research scientist at Yahoo Research (2015-2017) and Facebook AI (2017-2020). I received my PhD in late 2014 from the National Technical University of Athens under the supervision of Yannis Avrithis. I am passionate about traveling, photography, film, interactive visual arts and music.