28 September 2022

Highlights of Interspeech 2022

In this blog article, Laurent Besacier covers the INTERSPEECH 2022 papers that most interested him, based on his current research interests: self-supervised speech models, scaling speech-to-text models, large language models, ASR architectures and end-to-end speech-to-text translation.
16 June 2022

NAVER @CVPR 2022

NAVER LABS Europe, together with colleagues from NAVER, LINE and WEBTOON, will be in New Orleans. Visit the NAVER booth for job opportunities, internships, tech demos and more! 17 papers and workshop keynotes.

25 February 2022

On multimodal speech-text pre-trained models

Multimodal pre-training has the potential to be a game changer in spoken language processing. In this blog, we review three recent papers on the topic published by Meta, Microsoft (and academic partners) and Google.

2 December 2021

Deep regression on manifolds: a 3D rotation case study

Theoretical and experimental findings to improve regression applications, illustrated on a 3D rotation case study. Code available.

29 November 2021

PoseBERT

A novel, plug-and-play model for 3D human body or hand shape estimation in videos, trained by mimicking the BERT algorithm from the natural language processing community.
4 November 2021

Towards high quality multilingual NMT in production

How to improve the inference speed of large multilingual NMT models and perform fast, parameter-efficient domain and language adaptation with them. Work published at EMNLP and WMT 2021.
29 September 2021

Magnetic sensor-based localization and deep learning

A novel approach to indoor localization that uses magnetic field data from smartphone sensors and deep learning.
30 August 2021

Learning robot manipulation – modelling the reachable space of a robot and its inverse mapping

A new approach to learning robot manipulation by modelling the reachable space of a robot and its inverse mapping.
3 August 2021

Energy Based Models – Podcast

Podcast and edited transcript on energy-based models (EBMs). Guests Hady Elsahar and Marc Dymetman work on EBMs in the field of natural language processing and co-organised the ICLR 2021 workshop on EBMs.
15 July 2021

A new family of approaches to few-shot imitation

A new approach, DCRL, to learning few-shot imitation agents: simply feed demonstrations of a new test task to the learned policy. This approach has several advantages.
8 July 2021

SPLADE – a sparse bi-encoder BERT-based model achieves effective and efficient first-stage ranking

A new sparse bi-encoder BERT-based model for effective and efficient first-stage ranking. The first to rival dense models.  
21 June 2021

Continual learning of visual representations without catastrophic forgetting

Using domain randomization and meta-learning, computer vision models forget less when exposed to training samples from new domains. Remembering is crucial for the deployment of self-driving cars and robots that interact in dynamic environments.
