BLOG

30 September 2024

NAVER LABS Europe @ECCV 2024

We have a strong presence at this year's conference with invited talks, orals, posters, live demos and a win in the single-frame map-free relocalization challenge with the 3D vision model MASt3R!

16 June 2022

NAVER @CVPR 2022

NAVER LABS Europe, together with colleagues from NAVER, LINE and WEBTOON, will be in New Orleans with 17 papers and workshop keynotes. Visit the NAVER booth for job opportunities, internships, tech demos and more!

25 February 2022

On multimodal speech-text pre-trained models

Multimodal pre-training has the potential to be a game changer in spoken language processing. In this blog, we review three recent papers on the topic published by Meta, Microsoft (and academic partners) and Google.

19 May 2021

Releasing first-of-a-kind large-scale localization datasets in crowded indoor spaces

NAVER LABS releases the world's biggest visual localization dataset of indoor spaces, with over 130K images. The dataset was built with the NAVER LABS mapping robots M1X & COMET and is available in the unified data format kapture.
30 July 2020

kapture – A unified data format to facilitate visual localization and structure from motion.

Announcing 'kapture', the open-source release of a new SfM and visual localization data format, available for the ECCV 2020 workshop challenge on visual localization.
27 May 2020

A machine translation model for Covid-19 research

We are releasing a state-of-the-art multilingual and multi-domain neural machine translation model specialised for biomedical data that enables translation into English from five languages (French, German, Italian, Spanish and Korean).
29 January 2020

Announcing Virtual KITTI 2

Latest release of the popular synthetic image dataset for training and testing. New features include increased photorealism, stereo cameras and additional ground truth.
13 December 2019

Towards understanding human actions out of context with the Mimetics dataset

This article introduces our recent arXiv preprint on understanding human actions out of context with the newly introduced Mimetics dataset.
