CODE & DATA

Data, code and models released by NAVER LABS Europe

UNIC

Universal Classification Models via Multi-teacher Distillation

A general-purpose encoder for classification, distilled from multiple teachers. Accompanies the ECCV’24 paper.
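
A minimal sketch of the multi-teacher distillation idea, assuming hypothetical toy encoders and a simple cosine distillation loss; the released UNIC code defines its own projectors and objectives.

```python
# Illustrative sketch of multi-teacher distillation (not the released UNIC code).
# Teachers here are toy stand-ins; the loss is a simple cosine distillation objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentWithProjectors(nn.Module):
    def __init__(self, student, student_dim, teacher_dims):
        super().__init__()
        self.student = student
        # One projection head per teacher, mapping student features to that teacher's space.
        self.projectors = nn.ModuleList(nn.Linear(student_dim, d) for d in teacher_dims)

    def forward(self, x):
        feats = self.student(x)
        return [proj(feats) for proj in self.projectors]

def multi_teacher_distill_loss(projected, teacher_feats):
    # Average (1 - cosine similarity) across teachers; teachers stay frozen.
    losses = [1 - F.cosine_similarity(p, t.detach(), dim=-1).mean()
              for p, t in zip(projected, teacher_feats)]
    return torch.stack(losses).mean()

# Toy usage with random encoders standing in for real pretrained teachers.
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256))
teachers = [nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, d)).eval() for d in (384, 512)]
model = StudentWithProjectors(student, student_dim=256, teacher_dims=[384, 512])

x = torch.randn(8, 3, 32, 32)
with torch.no_grad():
    teacher_feats = [t(x) for t in teachers]
loss = multi_teacher_distill_loss(model(x), teacher_feats)
loss.backward()
```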

SLACK

Stable Learning of Augmentations with Cold-start and KL regularization.

Learning augmentation policies without prior knowledge.
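
As a rough illustration of the KL-regularization part of the name, the snippet below computes a KL term that anchors a categorical augmentation policy to a reference distribution (uniform here, purely as an assumption); the actual SLACK objective and its cold-start, bilevel training are defined in the repository.

```python
# Illustrative sketch: KL-regularize a learned augmentation policy toward a reference
# distribution (uniform is an assumption made for this example only).
import math
import torch
import torch.nn.functional as F

num_ops = 10                                        # number of augmentation operations
logits = torch.zeros(num_ops, requires_grad=True)   # learnable policy parameters

def kl_to_reference(logits):
    log_pi = F.log_softmax(logits, dim=-1)
    log_ref = -math.log(num_ops)                    # uniform reference distribution
    # KL(pi || ref) discourages the policy from drifting too far from the reference.
    return (log_pi.exp() * (log_pi - log_ref)).sum()

reg = kl_to_reference(logits)                       # added, scaled, to the policy objective
reg.backward()
```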

RELIS semantic segmentation

Reliability in semantic segmentation: are we on the right track?

A codebase to evaluate the robustness and uncertainty properties of semantic segmentation models, as described in the CVPR 2023 paper.
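
One standard uncertainty metric in this kind of study is the expected calibration error (ECE); the sketch below is a generic per-pixel ECE implementation, not the repository's own evaluation code.

```python
# Illustrative sketch: expected calibration error (ECE) over per-pixel predictions,
# a common uncertainty metric for segmentation models (not the repository's own code).
import torch

def expected_calibration_error(probs, labels, n_bins=15):
    """probs: (N, C) softmax outputs; labels: (N,) ground-truth class ids."""
    conf, pred = probs.max(dim=1)
    correct = pred.eq(labels).float()
    bins = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(())
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            # Gap between average confidence and accuracy in this bin, weighted by bin size.
            ece += mask.float().mean() * (conf[mask].mean() - correct[mask].mean()).abs()
    return ece

# Example with random segmentation logits flattened to (num_pixels, num_classes).
logits = torch.randn(4 * 64 * 64, 19)
labels = torch.randint(0, 19, (4 * 64 * 64,))
print(expected_calibration_error(logits.softmax(dim=1), labels).item())
```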

T-REX

No reason for no supervision: improved generalization in supervised models.

Pretrained models for transfer learning.

Synthetic ImageNet clones

Fake it till you make it: learning transferable representations from synthetic ImageNet clones.

Two ResNet50 models pretrained on our synthetic ImageNet clones, ImageNet-100-SD and ImageNet-1K-SD.
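
A minimal sketch of how such a checkpoint could be loaded into a torchvision ResNet-50 and used as a frozen feature extractor; the file name is hypothetical and the released checkpoints may use a different state-dict layout.

```python
# Illustrative sketch: load a pretrained ResNet-50 checkpoint and use it as a frozen
# feature extractor. The checkpoint path is hypothetical.
import torch
from torchvision.models import resnet50

model = resnet50(weights=None)
state_dict = torch.load("resnet50_imagenet_1k_sd.pth", map_location="cpu")  # hypothetical name
model.load_state_dict(state_dict, strict=False)  # strict=False in case the heads differ
model.fc = torch.nn.Identity()                   # drop the classifier, keep 2048-d features
model.eval()

with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))  # -> (1, 2048)
```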

ARTEMIS

Attention-based Retrieval with Text-Explicit Matching and Implicit Similarity.

An Explicit Matching module for compatibility and an Implicit Similarity module for relevance.
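
A schematic sketch of the two-head scoring idea, with a cosine-based compatibility score and a text-gated relevance score whose sum ranks candidates; the real ARTEMIS modules are attention-based and differ in detail.

```python
# Schematic sketch: score candidate targets with two heads whose scores are summed,
# mirroring the explicit-matching / implicit-similarity split at a high level only.
import torch
import torch.nn.functional as F

def explicit_matching(target_emb, text_emb):
    # Compatibility between the candidate image and the text modifier.
    return F.cosine_similarity(target_emb, text_emb, dim=-1)

def implicit_similarity(target_emb, ref_emb, text_emb):
    # Relevance to the reference image, modulated by the text (simple gating here).
    gate = torch.sigmoid(text_emb)
    return F.cosine_similarity(target_emb * gate, ref_emb * gate, dim=-1)

ref, text, targets = torch.randn(1, 512), torch.randn(1, 512), torch.randn(100, 512)
scores = explicit_matching(targets, text) + implicit_similarity(targets, ref, text)
best = scores.argmax()  # index of the retrieved candidate
```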

Learning super-features for image retrieval

A novel architecture for deep image retrieval.

Code for running our FIRe model, which is based solely on mid-level features that we call super-features.
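
A toy sketch of set-based matching between images represented as small sets of mid-level descriptors, scored by averaging per-feature maximum cosine similarities; the actual FIRe pipeline aggregates super-features differently.

```python
# Toy sketch: compare two images represented as small sets of mid-level descriptors.
# This only illustrates set-based matching, not the actual FIRe pipeline.
import torch
import torch.nn.functional as F

def set_similarity(feats_a, feats_b):
    """feats_a: (Na, D), feats_b: (Nb, D); both L2-normalized feature sets."""
    sim = feats_a @ feats_b.t()          # (Na, Nb) pairwise cosine similarities
    return sim.max(dim=1).values.mean()  # best match per query feature, averaged

img_a = F.normalize(torch.randn(64, 256), dim=1)   # 64 super-feature-like descriptors
img_b = F.normalize(torch.randn(64, 256), dim=1)
print(set_similarity(img_a, img_b).item())
```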

Neural feature fusion fields

3D distillation of self-supervised 2D image representations.

A method that improves dense 2D image feature extractors when they are applied to multiple images that can be reconstructed as a 3D scene.

Semantic segmentation (OASIS benchmark)

On the road to Online Adaptation for Semantic Image Segmentation (OASIS).

A PyTorch research codebase to replicate the CVPR 2022 paper.
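
A generic sketch of one online-adaptation baseline (test-time entropy minimization that updates only batch-norm parameters), shown for illustration only; it is not the protocol or the methods evaluated in the paper.

```python
# Generic sketch of an online-adaptation baseline: entropy minimization that updates
# only batch-norm affine parameters at test time. Not the paper's protocol or methods.
import torch
import torch.nn as nn

def collect_bn_params(model):
    params = []
    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            m.requires_grad_(True)
            params += [m.weight, m.bias]
    return params

def adapt_step(model, optimizer, image):
    logits = model(image)                                   # (B, C, H, W) segmentation logits
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

# Toy segmentation model adapting over a stream of test images.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
                      nn.Conv2d(16, 19, 1))
model.requires_grad_(False)                                     # freeze everything ...
optimizer = torch.optim.SGD(collect_bn_params(model), lr=1e-3)  # ... except BN affine params
for _ in range(3):
    adapt_step(model, optimizer, torch.randn(1, 3, 128, 128))
```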

Single-step adversarial training (N-FGSM)

Make some noise: reliable and efficient single-step adversarial training.

Official repo for the NeurIPS 2022 paper.
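
A sketch of a single-step attack in the spirit of N-FGSM as we understand it: add unclipped uniform noise, then take one signed-gradient step; hyper-parameters and training details should be taken from the official repository, not from this snippet.

```python
# Sketch of a single-step adversarial example in the spirit of N-FGSM: add unclipped
# uniform noise, then take one signed-gradient step. Hyper-parameters (eps, k) and the
# full training recipe should be taken from the official repository.
import torch
import torch.nn.functional as F

def n_fgsm_example(model, x, y, eps=8 / 255, k=2.0):
    # Uniform noise in [-k*eps, k*eps]; the perturbation is not projected back to the eps-ball.
    delta = torch.empty_like(x).uniform_(-k * eps, k * eps)
    delta.requires_grad_(True)
    loss = F.cross_entropy(model(torch.clamp(x + delta, 0, 1)), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = delta.detach() + eps * grad.sign()   # single FGSM step on top of the noise
    return torch.clamp(x + delta, 0, 1)

# Toy usage.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = n_fgsm_example(model, x, y)
```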

StacMR

Scene-Text Aware Cross-Modal Retrieval

A dataset for exploring cross-modal retrieval in images that contain scene-text instances.

TLDR

Twin Learning for Dimensionality Reduction

A dimensionality-reduction method that is simple, easy to implement and train, and broadly applicable.
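
An illustrative sketch of the idea as we understand it: nearest neighbors in the input space serve as positive pairs, and a Barlow-Twins-style redundancy-reduction loss is applied to projected low-dimensional encodings; dimensions and details here are arbitrary.

```python
# Illustrative sketch: nearest neighbors as positive pairs plus a Barlow-Twins-style
# redundancy-reduction loss on projected encodings. Dimensions and details are arbitrary.
import torch
import torch.nn as nn

def barlow_twins_loss(z1, z2, lam=5e-3):
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-6)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-6)
    c = (z1.t() @ z2) / z1.shape[0]              # cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

x = torch.randn(512, 128)                        # input features to reduce
encoder = nn.Linear(128, 8)                      # the dimensionality-reduction mapping
projector = nn.Sequential(nn.Linear(8, 256), nn.ReLU(), nn.Linear(256, 256))

# Nearest neighbor (excluding self) of each point in input space as its positive.
dist = torch.cdist(x, x)
dist.fill_diagonal_(float("inf"))
nn_idx = dist.argmin(dim=1)

z1, z2 = projector(encoder(x)), projector(encoder(x[nn_idx]))
loss = barlow_twins_loss(z1, z2)
loss.backward()
```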

CoG benchmark

Concept generalization in visual representation learning.

Code repository for the ImageNet-CoG benchmark introduced in the ICCV 2021 paper.

MOCHI

Mixing of Contrastive Hard negatives.

Data-mixing strategies for hard negatives that can be computed on the fly with minimal computational overhead and yield highly transferable visual representations.
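
An illustrative sketch of hard-negative mixing: synthesize extra negatives on the fly as convex combinations of the hardest negatives for each query; how many are mixed, and whether the query itself is mixed in, follow the paper rather than this snippet.

```python
# Illustrative sketch of hard-negative mixing for contrastive learning: extra negatives
# are synthesized as convex combinations of the hardest negatives for a given query.
import torch
import torch.nn.functional as F

def mix_hard_negatives(q, negatives, n_hard=16, n_synth=8):
    """q: (D,) normalized query; negatives: (K, D) normalized negatives from the queue."""
    sims = negatives @ q                             # similarity of each negative to the query
    hard = negatives[sims.topk(n_hard).indices]      # hardest negatives
    i = torch.randint(0, n_hard, (n_synth,))
    j = torch.randint(0, n_hard, (n_synth,))
    alpha = torch.rand(n_synth, 1)
    synth = alpha * hard[i] + (1 - alpha) * hard[j]  # convex combinations of hard negatives
    return F.normalize(synth, dim=1)

q = F.normalize(torch.randn(128), dim=0)
queue = F.normalize(torch.randn(4096, 128), dim=1)
extra_negs = mix_hard_negatives(q, queue)            # appended to the negatives of the loss
```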

Deep image retrieval

End-to-end learning of deep visual representations for image retrieval.

This repository contains the models and evaluation scripts for the papers ‘End-to-end Learning of Deep Visual Representations for Image Retrieval’ and ‘Learning with Average Precision: Training Image Retrieval with a Listwise Loss’.
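
As an illustration of the kind of metric such evaluation scripts report, the sketch below is a generic mean-average-precision computation for retrieval; it is not the repository's own evaluation code.

```python
# Illustrative sketch: average precision (AP) for a single retrieval query, the kind of
# metric averaged into mAP by retrieval evaluation scripts. Generic implementation only.
import torch

def average_precision(scores, relevant):
    """scores: (N,) similarity of the query to each database item; relevant: (N,) bool."""
    order = scores.argsort(descending=True)
    rel = relevant[order].float()
    cum_rel = rel.cumsum(0)
    precision_at_k = cum_rel / torch.arange(1, len(rel) + 1)
    return (precision_at_k * rel).sum() / rel.sum().clamp_min(1)

scores = torch.randn(1000)
relevant = torch.zeros(1000, dtype=torch.bool)
relevant[torch.randperm(1000)[:10]] = True
print(average_precision(scores, relevant).item())
```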
