CODE & DATA

Data, code and models released by NAVER LABS Europe

DEBiT (Dual Encoder Binocular Transformer)

Correspondence Pretext Tasks for Goal-oriented Visual Navigation

An end-to-end trained agent for image goal navigation. Accompanies the ICLR 2024 paper End-to-End (Instance)-Image Goal Navigation through Correspondence as an Emergent Phenomenon.

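As a rough illustration of the dual-encoder idea, here is a minimal PyTorch sketch; every module name and size below is invented for illustration and does not reflect DEBiT's actual architecture.

```python
# Illustrative only: a dual-encoder agent for image-goal navigation.
import torch
import torch.nn as nn

class DualEncoderAgent(nn.Module):
    def __init__(self, embed_dim=256, num_actions=4):
        super().__init__()
        # One encoder for the current observation, one for the goal image;
        # DEBiT additionally pretrains this pair with correspondence pretext tasks.
        self.obs_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))
        self.goal_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(embed_dim))
        self.policy = nn.Linear(2 * embed_dim, num_actions)

    def forward(self, obs, goal):
        joint = torch.cat([self.obs_encoder(obs), self.goal_encoder(goal)], dim=-1)
        return self.policy(joint)  # logits over navigation actions

agent = DualEncoderAgent()
obs = torch.rand(1, 3, 64, 64)   # current RGB observation
goal = torch.rand(1, 3, 64, 64)  # image of the goal location
print(agent(obs, goal).shape)    # torch.Size([1, 4])
```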

MASt3R

The latest version of the breakthrough 3D model DUSt3R

Building on the DUSt3R framework, MASt3R provides metric 3D reconstruction and dense local feature maps, and scales to thousands of images.
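
A minimal two-view inference sketch; the import paths and checkpoint name follow the public naver/mast3r repository but should be treated as assumptions, as they may differ across versions.

```python
from dust3r.inference import inference
from dust3r.utils.image import load_images
from mast3r.model import AsymmetricMASt3R

# Assumed checkpoint id; check the repository for the current name.
model = AsymmetricMASt3R.from_pretrained(
    "naver/MASt3R_ViTLarge_BaseDecoder_512_catmlpdpt_metric")
images = load_images(["view1.jpg", "view2.jpg"], size=512)
# Predictions contain metric 3D points plus dense local descriptors per pixel.
output = inference([tuple(images)], model, device="cuda", batch_size=1)
```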

ELITR-Bench

A benchmark for the evaluation of long-context LLMs on meeting transcripts.

The meeting data used in this benchmark originally comes from the ELITR dataset. The benchmark and the accompanying experiments are described in the ELITR-Bench paper and are an output of the EU UTTER project.
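
The evaluation amounts to long-context question answering over a full transcript; the sketch below shows that shape with a hypothetical llm callable and data layout, not the ELITR-Bench API.

```python
def answer(llm, transcript: str, question: str) -> str:
    # The entire meeting transcript goes into the model's context window.
    prompt = f"Meeting transcript:\n{transcript}\n\nQuestion: {question}\nAnswer:"
    return llm(prompt)

def evaluate(llm, meetings):
    # meetings: hypothetical list of {"transcript": str, "questions": [...]} dicts
    return [(qa["id"], answer(llm, m["transcript"], qa["question"]))
            for m in meetings for qa in m["questions"]]
```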

Pasero

Lightweight PyTorch framework for training and running text generation models.

Can be used for machine translation, speech translation, language modeling and dialogue, and supports a number of popular pre-trained models.
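
As a conceptual illustration (not Pasero's API), the core of "running" any text generation model is an autoregressive decoding loop like this plain-PyTorch sketch:

```python
import torch

@torch.no_grad()
def greedy_decode(model, bos_id: int, eos_id: int, max_len: int = 50):
    # model is assumed to map a token tensor (1, seq) to logits (1, seq, vocab).
    tokens = torch.tensor([[bos_id]])
    for _ in range(max_len):
        logits = model(tokens)
        next_id = logits[0, -1].argmax().item()   # greedy choice of next token
        tokens = torch.cat([tokens, torch.tensor([[next_id]])], dim=1)
        if next_id == eos_id:
            break
    return tokens[0].tolist()
```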

mHuBERT-147

The first general-purpose massively multilingual HuBERT speech representation model.

A promising compact model for speech processing pipelines, offering an unprecedented balance between high performance and parameter efficiency. Developed within the EU UTTER project.
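
A minimal sketch for extracting speech representations, assuming the checkpoint is published on Hugging Face as utter-project/mHuBERT-147 and is compatible with the transformers HubertModel class.

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, HubertModel

extractor = AutoFeatureExtractor.from_pretrained("utter-project/mHuBERT-147")
model = HubertModel.from_pretrained("utter-project/mHuBERT-147")

waveform = np.zeros(16000, dtype=np.float32)  # 1 s of 16 kHz audio (placeholder)
inputs = extractor(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = model(**inputs).last_hidden_state  # (1, frames, hidden_size)
```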

DUSt3R: Dense and Unconstrained Stereo 3D Reconstruction

3D reconstruction models made easy

3D reconstruction and visual localization from only a few images, with no user intervention and no priors.
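
A condensed sketch of the typical pipeline; the import paths and checkpoint name follow the public naver/dust3r repository README and should be treated as assumptions if versions differ.

```python
from dust3r.inference import inference
from dust3r.model import AsymmetricCroCo3DStereo
from dust3r.utils.image import load_images
from dust3r.image_pairs import make_pairs
from dust3r.cloud_opt import global_aligner, GlobalAlignerMode

model = AsymmetricCroCo3DStereo.from_pretrained(
    "naver/DUSt3R_ViTLarge_BaseDecoder_512_dpt")  # assumed checkpoint id
images = load_images(["img1.jpg", "img2.jpg"], size=512)
pairs = make_pairs(images, scene_graph="complete", symmetrize=True)
output = inference(pairs, model, device="cuda", batch_size=1)
# Global alignment merges the pairwise pointmaps into one consistent 3D scene.
scene = global_aligner(output, device="cuda", mode=GlobalAlignerMode.PointCloudOptimizer)
scene.compute_global_alignment(init="mst", niter=300, schedule="cosine", lr=0.01)
```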

DISCo

DIStributional Control of LLMs

A toolkit for controlling language models and other generative models.
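
Conceptually, distributional control tilts a base model's distribution so that chosen features reach target expectations. Here is a toy, self-contained illustration (not the DISCo API) using an exponential tilt exp(λ·φ(x)):

```python
import math

def phi(text: str) -> float:
    # A binary feature we want to control, e.g. presence of a target word.
    return 1.0 if "she" in text else 0.0

def tilted_probs(samples, lam):
    # p(x) ∝ a(x) · exp(λ·φ(x)); with a uniform base over these samples this
    # reduces to softmax-style reweighting by the feature.
    weights = [math.exp(lam * phi(s)) for s in samples]
    total = sum(weights)
    return [w / total for w in weights]

samples = ["he went home", "she went home", "they went home"]
print(tilted_probs(samples, lam=1.5))  # mass shifts toward samples with φ(x)=1
```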

CroCo

Cross-view Completion for 3D vision

Unsupervised representation learning from pairs of images showing the same scene from different viewpoints.
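
A toy sketch of the cross-view completion objective (illustrative shapes and modules, not CroCo's architecture): learned mask tokens attend to the visible patches of view 1 together with all of view 2, and are trained to regress the masked patches of view 1.

```python
import torch
import torch.nn as nn

dim, n_patches, n_masked = 64, 49, 44  # ~90% of view 1 is masked
decoder = nn.TransformerDecoderLayer(d_model=dim, nhead=4, batch_first=True)

view1 = torch.rand(1, n_patches, dim)   # patch embeddings of view 1
view2 = torch.rand(1, n_patches, dim)   # same scene, different viewpoint
perm = torch.randperm(n_patches)
masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]

# In real training a single learned mask token would be expanded per position.
mask_tokens = torch.zeros(1, n_masked, dim, requires_grad=True)
context = torch.cat([view1[:, visible_idx], view2], dim=1)  # cross-view memory
pred = decoder(mask_tokens, context)                        # reconstruct masked patches
loss = nn.functional.mse_loss(pred, view1[:, masked_idx])
```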

Zero-shot task generalization (models)

Multitask prompted training: models (BLOOM BigScience).

Models trained with multitask prompted training, benchmarked on their ability to perform completely unseen tasks specified in natural language.
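
A minimal sketch, assuming the released checkpoints include the BigScience T0 family on Hugging Face (e.g. bigscience/T0_3B), loadable as a standard seq2seq model:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "bigscience/T0_3B"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

# An unseen task specified purely in natural language:
prompt = ("Is this review positive or negative? "
          "Review: this is the best cast iron skillet you will ever buy")
inputs = tokenizer(prompt, return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```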

Zero-shot task generalization (prompts)

Multitask prompted training: prompts (BLOOM BigScience).

These prompted datasets benchmark the ability of a model to perform completely unseen tasks specified in natural language.
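
A minimal sketch, assuming the prompts are the P3 collection exposed through the promptsource library; names follow its public README and should be treated as assumptions.

```python
from promptsource.templates import DatasetTemplates

templates = DatasetTemplates("ag_news")                # all prompts for one dataset
template = templates[templates.all_template_names[0]]  # pick one template
example = {"text": "A stock market rally lifted tech shares.", "label": 0}
input_text, target_text = template.apply(example)      # prompted (input, target) pair
```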

Generative Distribution Control (GDC)

Debiasing large pretrained language models using distributional control.

A general framework for imposing constraints on samples of pretrained language models.
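
The framework's target is an energy-based model p(x) ∝ a(x)·exp(λ·φ(x)), where a is the pretrained LM and φ encodes the constraint. A toy sketch (not the released code) of estimating a feature moment under p via self-normalized importance sampling over samples from a:

```python
import math

def snis_moment(samples, phi, lam):
    # Importance weights w(x) = exp(λ·φ(x)), since p(x)/a(x) ∝ exp(λ·φ(x));
    # E_p[φ] ≈ Σ w(x_i)·φ(x_i) / Σ w(x_i) over samples x_i ~ a.
    weights = [math.exp(lam * phi(x)) for x in samples]
    return sum(w * phi(x) for w, x in zip(weights, samples)) / sum(weights)

phi = lambda text: 1.0 if "she" in text else 0.0
samples = ["he left", "she left", "he stayed", "she stayed"]  # pretend x ~ a
print(snis_moment(samples, phi, lam=2.0))  # pushes E[φ] above the base rate 0.5
```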
