Hard Negative Mixing for Contrastive Learning - Naver Labs Europe

(NeurIPS 2020)

Overview

Contrastive learning has become a key component of self-supervised learning approaches for computer vision. By learning to embed two augmented versions of the same image close to each other and to push the embeddings of different images apart, one can train highly transferable visual representations. As revealed by recent studies, heavy data augmentation and large sets of negatives are both crucial in learning such representations. At the same time, data mixing strategies, either at the image or the feature level, improve both supervised and semi-supervised learning by synthesizing novel examples, forcing networks to learn more robust features. In this paper, we argue that an important aspect of contrastive learning, i.e., the effect of hard negatives, has so far been neglected. To get more meaningful negative samples, current top contrastive self-supervised learning approaches either substantially increase the batch sizes or keep very large memory banks; increasing the memory size, however, leads to diminishing returns in terms of performance. We therefore start by delving deeper into a top-performing framework and show evidence that harder negatives are needed to facilitate better and faster learning. Based on these observations, and motivated by the success of data mixing, we propose hard negative mixing strategies at the feature level that can be computed on-the-fly with minimal computational overhead. We exhaustively ablate our approach on linear classification, object detection and instance segmentation, and show that employing our hard negative mixing procedure improves the quality of visual representations learned by a state-of-the-art self-supervised learning method.
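The contrastive objective described above is commonly instantiated as the InfoNCE loss: the query should score high against its positive (the other augmented view) and low against all negatives. The following is a minimal single-query sketch in numpy, not the paper's implementation; the function name and the temperature value are illustrative.

```python
import numpy as np

def info_nce(q, pos, negs, tau=0.07):
    """Minimal InfoNCE contrastive loss for a single query (illustrative sketch).

    q, pos: (d,) L2-normalized embeddings of two views of the same image.
    negs:   (K, d) L2-normalized embeddings of other images (negatives).
    tau:    temperature (0.07 is a common choice, e.g. in MoCo).
    """
    # Similarities of the query to the positive and to every negative.
    logits = np.concatenate([[q @ pos], negs @ q]) / tau
    logits -= logits.max()  # subtract max for numerical stability
    # Cross-entropy with the positive placed at index 0.
    return -logits[0] + np.log(np.exp(logits).sum())
```

Negatives that are nearly orthogonal to the query contribute almost nothing to this loss, which is why the paper focuses on the few hard ones.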


Figure 1. Illustration of MoCHi. The proposed approach generates synthetic hard negatives on-the-fly for each positive (query).

We refer to the proposed approach as MoCHi, which stands for “(M)ixing (o)f (C)ontrastive (H)ard negat(i)ves”. A toy example of the proposed hard negative mixing strategy is presented in Figure 1. It shows a t-SNE plot after running MoCHi on 32-dimensional random embeddings on the unit hypersphere. We see that for each positive query (red square), the memory (gray marks) contains many easy negatives and few hard ones, i.e., many of the negatives are too far from the query to contribute to the contrastive loss. We propose to mix only the hardest negatives (based on their similarity to the query) and synthesize new, hopefully also hard but more diverse, negative points (blue triangles).
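The mixing step above can be sketched as follows: rank the memory bank by similarity to the query, keep the hardest negatives, and synthesize new points as convex combinations of random pairs of them, re-projected onto the unit hypersphere. This is an illustrative numpy sketch under assumptions, not the official implementation; the function name and the defaults for `n_hard` and `s` are hypothetical.

```python
import numpy as np

def mochi_synthetic_negatives(query, memory, n_hard=1024, s=128, rng=None):
    """Sketch of MoCHi-style hard negative mixing (illustrative, not the
    official code).

    query:  (d,) L2-normalized query embedding.
    memory: (K, d) L2-normalized negative memory bank.
    n_hard: number of hardest negatives to mix (assumed parameter name).
    s:      number of synthetic negatives to generate.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rank negatives by similarity to the query; keep the n_hard hardest.
    sims = memory @ query
    hard = memory[np.argsort(-sims)[:n_hard]]
    # Mix random pairs of hard negatives with convex combinations.
    i = rng.integers(0, len(hard), size=s)
    j = rng.integers(0, len(hard), size=s)
    alpha = rng.uniform(0.0, 1.0, size=(s, 1))
    mixed = alpha * hard[i] + (1.0 - alpha) * hard[j]
    # Re-project onto the unit hypersphere, since embeddings are normalized.
    return mixed / np.linalg.norm(mixed, axis=1, keepdims=True)
```

The synthetic points are then appended to the set of negatives for the contrastive loss; because they are built only from the hardest negatives, they remain close to the query while adding diversity.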

Pre-trained models 

All PyTorch checkpoints below use a ResNet-50 backbone and were trained with a modified version of the public MoCo codebase. The numbers are provided for reference; please check the tables in the paper for more details and for variance (all numbers below are averaged over at least 3 runs). Not all models were tested on all datasets.

| Model | Epochs | Im-1k (Top-1 Acc) | PASCAL VOC (AP50/AP/AP75) | COCO (AP_bb/AP_mk) |
|---|---|---|---|---|
| MoCHi (128,1024,512) | 100 | 63.4 | 81.1/54.7/60.9 | 37.8/33.2 |
| MoCHi (512,1024,512) | 100 | 63.4 | 81.3/54.7/60.6 | 37.4/33.0 |
| MoCHi (128,1024,512) | 200 | 66.9 | 82.7/57.5/64.4 | 39.2/34.3 |
| MoCHi (512,1024,512) | 200 | 67.6 | 82.7/57.1/64.1 | 39.4/34.5 |
| MoCHi (128,1024,512) | 800 | 68.7 | 83.3/57.3/64.2 | TBD/TBD |
| MoCHi (512,1024,512) | 800 | 69.2 | 82.6/56.9/63.7 | TBD/TBD |
| MoCHi (128,1024,512) | 1000 | 69.8 | 83.4/58.7/65.9 | TBD/TBD |
| MoCHi (512,1024,512) | 1000 | 70.6 | 83.2/58.4/65.5 | TBD/TBD |

BibTeX:

@InProceedings{kalantidis2020hard,
  author = {Kalantidis, Yannis and Sariyildiz, Mert Bulent and Pion, Noe and Weinzaepfel, Philippe and Larlus, Diane},
  title = {Hard Negative Mixing for Contrastive Learning},
  booktitle = {Neural Information Processing Systems (NeurIPS)},
  year = {2020}
}