Data, code and models released by NAVER LABS Europe


A novel, plug and play model for human 3D shape estimation in videos.

Model trained by mimicking the BERT algorithm from the natural language processing community.


Quantization-based 3D human motion generation and forecasting.

An auto-regressive transformer-based approach which internally compresses human motion into quantized latent sequences.
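The quantization step described above can be sketched as a nearest-neighbour codebook lookup. This is a minimal illustration of the general idea, not the released model: the codebook size, feature dimension and data below are all hypothetical.

```python
import numpy as np

# Minimal sketch of quantizing a continuous motion sequence into a
# discrete latent sequence via nearest-neighbour codebook lookup.
# Codebook size and feature dimension are illustrative only.

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 8))        # 64 learned code vectors, dim 8
motion_latents = rng.normal(size=(16, 8))  # 16 frames of continuous features

# For each frame, pick the index of the closest code vector.
dists = np.linalg.norm(motion_latents[:, None, :] - codebook[None, :, :], axis=-1)
codes = dists.argmin(axis=1)               # discrete token sequence, shape (16,)

# An auto-regressive transformer would then model p(codes[t] | codes[:t]).
quantized = codebook[codes]                # reconstruction from the codebook
print(codes.shape, quantized.shape)
```

Generation and forecasting then reduce to sampling the next discrete code given the previous ones, and decoding the resulting token sequence back to motion.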


3D human poses from natural language.

A dataset pairing 3D human poses with both automatically generated and human-written descriptions.


A shallow multilingual machine translation model for low-resource languages.

Covers more than 10K language pairs and achieves competitive results with M2M-100 while being much smaller and faster.


Semantic segmentation (OASIS benchmark)

On the road to Online Adaptation for Semantic Image Segmentation (OASIS).

A PyTorch codebase for replicating the CVPR 2022 paper.

Single-step adversarial training (N-FGSM)

Make some noise: reliable and efficient single-step adversarial training.

Official repo for the NeurIPS 2022 paper.
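The idea behind single-step training with noise can be sketched on a toy model: add random noise to the input, then take one signed-gradient (FGSM) step from the noisy point. The linear classifier, step sizes and data below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Sketch of a single-step adversarial perturbation in the spirit of
# N-FGSM: random noise followed by one FGSM step, with no clipping.
# The toy logistic-regression model and hyperparameters are hypothetical.

rng = np.random.default_rng(1)
w = rng.normal(size=4)                     # toy linear classifier weights
x = rng.normal(size=4)                     # one input example
y = 1.0                                    # its label

def loss_grad_x(x, w, y):
    # Gradient of the logistic loss with respect to the input x.
    p = 1.0 / (1.0 + np.exp(-x @ w))
    return (p - y) * w

eps, k = 0.1, 2.0                          # attack radius and noise scale
eta = rng.uniform(-k * eps, k * eps, size=x.shape)   # random noise
x_adv = x + eta + eps * np.sign(loss_grad_x(x + eta, w, y))

# Adversarial training would then minimise the loss at x_adv instead of x.
print(np.abs(x_adv - x).max())
```

Training on such perturbed inputs aims to retain robustness while paying only one gradient computation per step, instead of the many steps of PGD-based training.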

Zero-shot task generalization (models)

Multitask prompted training: models (BLOOM BigScience).

These prompted datasets benchmark the ability of a model to perform completely unseen tasks specified in natural language.

Zero-shot task generalization (prompts)

Multitask prompted training: prompts (BLOOM BigScience).

These prompted datasets benchmark the ability of a model to perform completely unseen tasks specified in natural language.


NeuralDiff: Segmenting 3D objects that move in egocentric videos.

This repository contains the official implementation of the 3DV 2021 paper.

Generative Distribution Control (GDC)

Debiasing large pretrained language models using distributional control.

A general framework for imposing constraints on samples of pretrained language models.

Large-scale localization indoor datasets

Large-scale localization datasets in crowded indoor spaces.

Five new indoor datasets with over 130K images.

NMT & Efficient Multilingual NMT

Code, model checkpoints, test sets and outputs for 4 multilingual NMT papers (EMNLP2021).

Publications concern efficient inference, continual learning, unsupervised NMT and domain adaptation.


Scene-Text Aware Cross-Modal Retrieval

Dataset that allows exploration of cross-modal retrieval where images contain scene-text instances.


Twin Learning for Dimensionality Reduction

A method that is simple, easy to implement and train, and broadly applicable.

CoG benchmark

Concept generalization in visual representation learning.

Code repository for the ImageNet-CoG benchmark introduced in the ICCV 2021 paper.

Kapture localization

A toolbox with various localization related algorithms (mapping, localization, benchmarking IR for visual localization).

Relies strongly on the kapture format for data representation and manipulation.


Multi-lingual & multi-domain translation model.

Model specialised for biomedical data.


Differentiable Cross Modal Model

Code implementing the model introduced in Learning to Rank Images with Cross-Modal Graph Convolutions (ECIR’20).


Distillation of Part Experts for whole-body 3D pose estimation in the wild.

A novel, efficient model for whole-body 3D pose estimation (including bodies, hands and faces), trained by mimicking the output of hand-, body- and face-pose experts.


Progressive skeletonization

Method for extreme pruning of artificial neural networks at initialization.
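Pruning at initialization can be sketched as scoring every weight before training and keeping only the top fraction. The sketch below uses plain weight magnitude as the score; the paper's criterion is gradient-based and applied progressively, so everything here (layer shape, sparsity, scoring rule) is an illustrative assumption.

```python
import numpy as np

# Sketch of pruning at initialization: keep the top-k weights by a
# saliency score and zero out the rest, before any training happens.
# Magnitude saliency is a simplification of the paper's criterion.

rng = np.random.default_rng(2)
w = rng.normal(size=(8, 8))                 # untrained weight matrix
sparsity = 0.9                              # prune 90% of the weights

k = int(w.size * (1.0 - sparsity))          # number of weights to keep
threshold = np.sort(np.abs(w).ravel())[-k]  # k-th largest magnitude
mask = (np.abs(w) >= threshold).astype(w.dtype)

w_pruned = w * mask                         # sparse network to be trained
print(int(mask.sum()), "weights kept out of", w.size)
```

Extreme sparsity regimes are where a progressive schedule matters: removing almost all weights in one shot tends to disconnect the network, while pruning in stages preserves trainability.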


A unified data format to facilitate visual localization and SfM.

Kapture is a file format as well as a set of tools for manipulating datasets, and in particular Visual Localization and Structure from Motion data.

LCR-Net release V2.0

Localization Classification Regression for human pose.

Improved pose proposals integration for multi-person 2D and 3D pose detection in natural images.


Ultra-minimal version of Lisp.

Implementation of a fully fledged Lisp interpreter with data structures, pattern programming and high-level functions with lazy evaluation à la Haskell. Comes with an editor from TAMGU.


Mixing of Contrastive Hard negatives.

Data mixing strategies that can be computed on the fly with minimal computational overhead, yielding highly transferable visual representations.
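The hard-negative mixing idea can be sketched as follows: rank existing negatives by similarity to the query, then synthesise extra negatives as convex combinations of the hardest ones. Feature sizes, the number of mixed points and the data are illustrative assumptions.

```python
import numpy as np

# Sketch of on-the-fly hard-negative mixing for contrastive learning.
# All dimensions and counts below are hypothetical.

rng = np.random.default_rng(3)
query = rng.normal(size=16)
negatives = rng.normal(size=(32, 16))

# Normalise so similarity is a dot product on the unit sphere.
query /= np.linalg.norm(query)
negatives /= np.linalg.norm(negatives, axis=1, keepdims=True)

# Rank negatives by similarity to the query; the most similar are "hardest".
order = (negatives @ query).argsort()[::-1]
hard = negatives[order[:8]]

# Mix random pairs of hard negatives with random convex weights.
i, j = rng.integers(0, 8, size=(2, 4))
lam = rng.uniform(size=(4, 1))
mixed = lam * hard[i] + (1 - lam) * hard[j]
mixed /= np.linalg.norm(mixed, axis=1, keepdims=True)
print(mixed.shape)
```

The synthetic points are used as additional negatives in the contrastive loss; since mixing is a few vector operations per batch, the overhead is negligible.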
