Concept generalization in visual representation learning - Naver Labs Europe

Concept generalization in visual representation learning

Mert Bulent Sariyildiz1,2, Yannis Kalantidis1, Diane Larlus1, Karteek Alahari2

1 NAVER LABS Europe            2 Inria

CoG Benchmark

Measuring concept generalization, i.e., the extent to which models trained on a set of (seen) visual concepts can be used to recognize a new set of (unseen) concepts, is a popular way of evaluating visual representations, especially when they are learned with self-supervised learning. Nonetheless, the choice of which unseen concepts to use is usually made arbitrarily, and independently from the seen concepts used to train representations, thus ignoring any semantic relationships between the two.

In this paper, we argue that the semantic relationships between seen and unseen concepts affect generalization performance and propose CoG, a novel benchmark on the ImageNet dataset that enables measuring concept generalization in a principled way. Our benchmark leverages expert knowledge from WordNet to define a sequence of unseen ImageNet concept sets that are increasingly semantically distant from ImageNet-1K, a ubiquitous training set. This allows us to benchmark visual representations learned on ImageNet-1K out of the box: we analyse a number of supervised, semi-supervised and self-supervised models through the prism of concept generalization, and show how our benchmark uncovers a number of interesting insights.
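As a rough illustration of how semantic distance between concepts can be quantified from a hypernym hierarchy, the sketch below ranks "unseen" concepts by Wu-Palmer similarity to a "seen" concept. The toy hierarchy and concept names are purely illustrative stand-ins; the actual benchmark relies on the full WordNet graph underlying ImageNet.

```python
# Toy hypernym hierarchy standing in for WordNet (all names hypothetical);
# each concept maps to its parent, with "entity" as the root.
parents = {
    "husky": "dog", "beagle": "dog", "dog": "canine", "wolf": "canine",
    "canine": "mammal", "tabby": "cat", "cat": "feline", "feline": "mammal",
    "mammal": "animal", "salmon": "fish", "fish": "animal",
    "animal": "entity", "oak": "tree", "tree": "plant", "plant": "entity",
}

def path_to_root(concept):
    """Return the list [concept, parent, ..., 'entity']."""
    path = [concept]
    while path[-1] in parents:
        path.append(parents[path[-1]])
    return path

def wu_palmer(a, b):
    """Wu-Palmer similarity: rewards a deep lowest common ancestor."""
    ancestors_b = set(path_to_root(b))
    lca = next(c for c in path_to_root(a) if c in ancestors_b)
    depth = lambda c: len(path_to_root(c))  # depth of the root is 1
    return 2 * depth(lca) / (depth(a) + depth(b))

# Rank "unseen" concepts by similarity to a "seen" concept: semantically
# closer concepts would land in earlier generalization levels.
seen, unseen = "husky", ["beagle", "tabby", "salmon", "oak"]
for c in sorted(unseen, key=lambda c: -wu_palmer(seen, c)):
    print(f"{c}: {wu_palmer(seen, c):.2f}")
```

Sorting all unseen concepts by such a similarity to the whole seen set, then splitting the ranking into equal chunks, yields levels of increasing semantic distance.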


  • 12 Dec 2020: We evaluated three more publicly available models (MoCHi, InfoMin Aug. and MEAL-v2) on CoG; results are presented below. Until our benchmarking code is publicly available, feel free to contact us with suggestions of models we should benchmark on CoG.
  • 10 Dec 2020: The first version of our paper is released on arXiv. We are working on releasing the code to reproduce the results presented in the paper and also to easily run the benchmark on your pretrained models.

Benchmark results

Below, we present the main results from evaluating state-of-the-art models on the CoG benchmark. We refer readers to our paper for further analysis and observations.

In the first version of our paper, we evaluated 7 models on CoG:

  • Sup – Supervised classifier pretrained on ImageNet-1K, available in the torchvision repository.
  • S-Sup – Semi-supervised classifier, pretrained on YFCC100M then fine-tuned on ImageNet-1K.
  • S-W-Sup – Semi-weakly supervised classifier, pretrained on Instagram-1B then fine-tuned on ImageNet-1K.
  • MoCo-v2 – Self-supervised model pretrained on ImageNet-1K.
  • SimCLR-v2 – Self-supervised model pretrained on ImageNet-1K.
  • BYOL – Self-supervised model pretrained on ImageNet-1K.
  • SwAV – Self-supervised model pretrained on ImageNet-1K.

We further present results for three publicly available models:

  • MoCHi – (NeurIPS 2020) Self-supervised, trained on ImageNet-1K.
  • InfoMin Aug. – (NeurIPS 2020) Self-supervised, trained on ImageNet-1K.
  • MEAL-v2 – (NeurIPS Workshops 2020) Supervised, distilled model, trained on ImageNet-1K.

Generalization to unseen concepts

Top-1 accuracy

Top-1 accuracy for each method (pretrained on IN-1K), using logistic regression classifiers trained on pre-extracted features for the concepts in IN-1K and in each of our generalization levels (L1/2/3/4/5), with all the training samples, i.e., N = All. Panels (a)-(c) visualize the same results from different perspectives: (a) absolute top-1 accuracy, (b) performance relative to Sup, and (c) the drop in performance on each level relative to IN-1K.
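The evaluation protocol above, a linear classifier trained on frozen, pre-extracted features, can be sketched as follows. The Gaussian-blob "features", the two-concept setup and all hyperparameters below are placeholders for illustration, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-extracted backbone features: two hypothetical concepts
# drawn as Gaussian blobs in a 16-d feature space.
n, d = 200, 16
X = np.vstack([rng.normal(-1, 1, (n, d)), rng.normal(1, 1, (n, d))])
y = np.concatenate([np.zeros(n, dtype=int), np.ones(n, dtype=int)])

# L2-normalize the frozen features, a common step before linear probing.
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Plain batch gradient descent on binary logistic regression.
w, b = np.zeros(d), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
    w -= 1.0 * (X.T @ (p - y) / len(y))     # gradient of the log loss
    b -= 1.0 * np.mean(p - y)

acc = np.mean((X @ w + b > 0) == (y == 1))
print(f"top-1 accuracy: {acc:.2f}")
```

The backbone is never updated: only the linear classifier is trained per concept set, so differences in accuracy across levels reflect the frozen representation itself.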

Generalization to unseen concepts with a few training examples


Alignment & uniformity scores
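Alignment and uniformity, as proposed by Wang & Isola (2020), can be computed directly from L2-normalized embeddings: alignment is the mean distance between positive pairs, and uniformity is the log of the mean Gaussian-kernel similarity over all pairs. The sketch below uses random embeddings and perturbed copies as stand-in "positive pairs"; it is a minimal illustration, not our benchmarking code.

```python
import numpy as np

def alignment(x, y, alpha=2):
    """Mean distance between positive-pair embeddings (lower is better)."""
    return np.mean(np.linalg.norm(x - y, axis=1) ** alpha)

def uniformity(x, t=2):
    """Log mean Gaussian-kernel value over all distinct pairs (lower is better)."""
    sq = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
    iu = np.triu_indices(len(x), k=1)   # distinct pairs only
    return np.log(np.mean(np.exp(-t * sq[iu])))

rng = np.random.default_rng(0)
z = rng.normal(size=(128, 32))
z /= np.linalg.norm(z, axis=1, keepdims=True)     # unit-norm embeddings
z2 = z + 0.05 * rng.normal(size=z.shape)          # perturbed "second views"
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)

print(f"alignment:  {alignment(z, z2):.3f}")   # near 0: views stay close
print(f"uniformity: {uniformity(z):.3f}")      # more negative: more spread out
```

In practice the positive pairs come from two augmented views of the same image passed through the frozen encoder.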


Clustering evaluation measures