BLOG

6 October 2025

Multimodal, multilingual and trustworthy AI models and resources for XR

Our contributions to the EU UTTER project, including models, data, resources and publications.
18 June 2025

Efficient online text compression for RAG

Making RAG faster and leaner with new online compression techniques. OSCAR is the first fast and accurate soft online compression model for fine-tuned LLMs, while PROVENCE is a plug-and-play hard prompt model for any LLM.
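To give a concrete sense of the "hard", plug-and-play style of compression mentioned above, here is a minimal sketch of pruning retrieved context before it reaches the LLM. It is not PROVENCE itself: the embedding model, the cosine-similarity scorer and the threshold are illustrative assumptions.

```python
# Hypothetical sketch of hard context pruning for RAG: keep only retrieved
# sentences whose relevance to the query clears a threshold, so any LLM
# receives a shorter prompt. Scorer and threshold are placeholder choices.
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def prune_context(query: str, sentences: list[str], threshold: float = 0.3) -> list[str]:
    q_emb = encoder.encode(query, convert_to_tensor=True)
    s_emb = encoder.encode(sentences, convert_to_tensor=True)
    scores = util.cos_sim(q_emb, s_emb)[0]  # one relevance score per sentence
    return [s for s, score in zip(sentences, scores) if score >= threshold]

# The surviving sentences are simply concatenated into the prompt of any LLM,
# which is what makes this form of compression plug-and-play.
```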
30 August 2024

NAVER LABS Europe @Interspeech 2024

Overview of our presence at Interspeech 2024, with links to papers, models and a demo.

6 May 2024

NAVER LABS Europe @ICLR 2024

An overview of the 6 papers we're presenting at ICLR in Vienna, Austria. In all, NAVER has 20 accepted papers, of which three are spotlights.

3 April 2024

NAVER LABS Europe at ICASSP 2024

The work we're presenting this year in multi-task, multimodal and multilingual speech processing technologies, including results made available to the community. We're also co-organizing the popular Self-supervision in Audio, Speech and Beyond (SASB) workshop.
5 July 2023

disco: a toolkit for controlling language models and other generative models

disco is an open-source toolkit for controlling language models and other generative models.
20 April 2021

NAVER LABS Europe @ EACL2021

Papers and activities of NAVER LABS Europe at EACL 2021: NLG, scaling to larger contexts, the cost of supervised data and controlling model output, plus ASR at the AfricaNLP workshop.
17 April 2021

A scalable Transformer architecture for summarizing long documents

Our Global BERT-based Transformer architecture fuses global and local information at every layer, resulting in a reading comprehension model that achieves a deeper understanding of long documents and offers flexibility for downstream tasks.
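The teaser above states the core mechanism: a local, block-wise view of the text is fused with a global, document-level view inside every layer. A minimal PyTorch sketch of that idea (not the released model; block size, dimensions and mean-pooling are assumptions) could look like this:

```python
import torch
import torch.nn as nn

class GlobalLocalLayer(nn.Module):
    """Toy layer fusing block-local attention with a pooled global summary."""
    def __init__(self, dim: int = 768, heads: int = 12, block: int = 128):
        super().__init__()
        self.block = block
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)  # mix local and global signals

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim); seq is assumed divisible by the block size.
        b, n, d = x.shape
        # Local view: tokens attend only within fixed-size blocks.
        blocks = x.reshape(b * (n // self.block), self.block, d)
        local, _ = self.local_attn(blocks, blocks, blocks)
        local = local.reshape(b, n, d)
        # Global view: mean-pool the whole sequence and broadcast it back.
        global_ctx = x.mean(dim=1, keepdim=True).expand(-1, n, -1)
        return self.fuse(torch.cat([local, global_ctx], dim=-1))
```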
19 July 2019

Improving news recommendation with hybrid algorithms

A combination of collaborative filtering approaches and contextual information can help overcome unpredictable behavior while taking position and layout bias into account, for more effective recommendations.
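As a toy illustration of the ingredients mentioned above, the sketch below blends a collaborative-filtering score with a contextual one and applies a simple inverse-propensity correction for position bias. The weights, propensities and function names are hypothetical placeholders, not the production AiRS pipeline.

```python
import numpy as np

def hybrid_score(cf_score: float, ctx_score: float, alpha: float = 0.6) -> float:
    """Blend a collaborative-filtering score with a contextual-relevance score."""
    return alpha * cf_score + (1.0 - alpha) * ctx_score

def debiased_ctr(clicks: np.ndarray, propensities: np.ndarray) -> float:
    """Inverse-propensity-weighted click-through rate: clicks earned in
    prominent positions (high examination probability) are down-weighted,
    so position and layout bias do not inflate an article's apparent appeal."""
    return float((clicks / propensities).mean())

# Toy usage: an article with a modest CF score but a strong contextual match.
print(hybrid_score(cf_score=0.4, ctx_score=0.9))                      # 0.6
print(debiased_ctr(np.array([1, 0, 1, 0]), np.array([0.9, 0.9, 0.3, 0.3])))
```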
