New online compression models give Large Language Models (LLMs) the accuracy benefits of retrieval without the overhead.
Retrieval-Augmented Generation (RAG) is a technique that lets LLMs pull in relevant documents to ground their responses in real-world data. While this makes answers better, it also makes them slower and more expensive to generate, because the retrieved documents add extra processing to every request. In this article we describe how we’ve been tackling the challenge of making RAG faster and leaner with new online compression techniques and the models we’ve made available.
LLMs increasingly rely on RAG to improve reliability, but feeding an LLM 10 to 50 documents of context can make even basic question answering roughly 4 to 5 times slower, making it expensive and impractical for real-time use. To address this, the research community, including our team, has been developing methods to compress these augmented prompts, preserving essential information while drastically reducing their length to enable accurate and efficient responses.
Figure 1: Comparison of an LLM receiving a prompt without RAG vs. with RAG. On the top line, with no RAG, the prompt (roughly a dozen words) goes straight to the LLM. On the line below, RAG lengthens the prompt with the extra information pulled in from external documents.
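To make the overhead concrete, here is a back-of-the-envelope sketch; the question, documents and word counts are illustrative, not from our experiments:

```python
# Illustrative only: rough numbers showing how retrieval inflates the prompt.
question = "When was the Eiffel Tower completed?"
retrieved_docs = [f"Document {i}: ~300 words of supporting text ..." for i in range(20)]

plain_prompt = f"Answer the question: {question}"
rag_prompt = "\n\n".join(retrieved_docs) + f"\n\nAnswer the question: {question}"

# At ~300 words per document, 20 documents add ~6,000 words of input.
# Prompt processing cost grows with input length, which is where the
# 4-5x slowdown for basic question answering comes from.
print(len(plain_prompt.split()), "words without RAG")
print(len(rag_prompt.split()), "words with RAG")
```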
Compression methods for RAG are typically characterized along two axes: hard vs. soft prompts, and online vs. offline processing.
Hard-prompt compression selects key text snippets, i.e. exact sentences or phrases extracted from the retrieved documents; this is called pruning. Soft-prompt compression instead encodes semantic meaning as embeddings. Soft prompts are typically shorter and more abstract than hard prompts, yet carry more information. This difference is illustrated in Figure 2 below.
Figure 2: A comparison of an LLM prompt with RAG and no compression, with RAG and hard-prompt compression, and with RAG and soft-prompt compression.
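A toy sketch of the same contrast, with made-up content and random vectors standing in for real compressed embeddings:

```python
# Toy illustration of the two kinds of compressed context (not a real compressor).
import numpy as np

# Hard compression: text in, text out - keep only the relevant sentences.
hard_compressed = [
    "The Eiffel Tower was completed in 1889.",  # kept: answers the question
    # "It is repainted every seven years."      # dropped: irrelevant
]

# Soft compression: text in, vectors out - a few dense embeddings that the
# LLM consumes directly (random placeholders here).
soft_compressed = np.random.randn(4, 1024)  # e.g. 4 "memory" vectors of model dim 1024
```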
As the name suggests, offline methods pre-process documents before the user query is known, which can limit their effectiveness because they rely on generic rather than query-specific context. In contrast, online methods perform compression on the fly, tailoring the context to each question. This typically improves accuracy, since the most relevant and up-to-date information is used, but online compression needs to be fast to be practical in real-time scenarios.
In general, a good compression strategy should make the input prompt as small as possible, preserve factual accuracy, and introduce minimal overhead. These competing goals mean some kind of trade-off is usually unavoidable.
Several recent works have tackled compression in RAG, including hard-compression approaches like RECOMP [1] and LLMLingua [2,3], and soft methods such as In-Context Autoencoders (ICAE) [4], KV compression [5] and DODO [6]. Yet they tend to suffer from one of two key limitations: they either degrade answer quality or are too slow for deployment in real-world systems.
We’ve been working to develop online methods that address both issues simultaneously. In the plot below (Figure 3), you can see how the different models are positioned. Our models, OSCAR and PROVENCE, are designed to strike a better balance between accuracy and efficiency.
Figure 3: An overview of various RAG compression models, categorized by online/offline and hard/soft strategies. OSCAR and PROVENCE are the compression models introduced in this article.
PROVENCE is our efficient hard-prompt online compression method: it prunes irrelevant sentences while preserving answer quality. It selects only the sentences from the retrieved documents that are most relevant to the question, reducing input size and context noise, and it works in a plug-and-play fashion with any LLM.
Unlike prior methods that work at the sentence level, PROVENCE encodes all the sentences in a document alongside the user question. Working at the document level means PROVENCE captures coreferences and produces more accurate compression (see Figure 4).
Moreover, PROVENCE streamlines the pipeline by integrating reranking and compression into a single process. Reranking assigns relevance scores to the retrieved documents so they can be reordered by pertinence. In this unified step, documents are scored for relevance and compressed simultaneously, eliminating the need for separate computations and significantly reducing overhead: the compression comes essentially for free. Efficiency is further helped by using a lightweight 300M-parameter cross-encoder model for reranking instead of a full-scale LLM, which makes inference fast.
Figure 4: An illustration of how the prompt is created using the hard online RAG compression method PROVENCE.
Details on PROVENCE are described in the technical blog post and paper [8], and the model is available on Hugging Face.
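To make the idea concrete, here is a minimal sketch of query-conditioned sentence pruning. It is not PROVENCE itself: it substitutes an off-the-shelf cross-encoder from the sentence-transformers library and a naive sentence splitter, and it scores sentences independently rather than jointly at the document level (see the model card on Hugging Face for actual usage).

```python
# A rough sketch of query-conditioned sentence pruning in the spirit of
# PROVENCE, using an off-the-shelf cross-encoder (not the PROVENCE model).
from sentence_transformers import CrossEncoder

scorer = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # lightweight reranker

def prune(question: str, document: str, threshold: float = 0.0) -> str:
    # Naive sentence splitting; PROVENCE scores all sentences of a document
    # jointly alongside the question, which also captures coreferences.
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    scores = scorer.predict([(question, s) for s in sentences])
    # Keep only question-relevant sentences, preserving their original order.
    return ". ".join(s for s, sc in zip(sentences, scores) if sc > threshold)

compressed = prune(
    "When was the Eiffel Tower completed?",
    "The Eiffel Tower was completed in 1889. It is repainted every seven years.",
)
print(compressed)  # the sentence about repainting is likely to be pruned
```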
Soft prompts are attractive because they can achieve higher compression rates than hard compression of raw text. Unfortunately, early attempts at soft compression suffered either a drop in accuracy in the final output or slow inference (because an LLM call is needed to compute the compressed embeddings). These issues meant most soft-compression methods were offline [4,7].
To address these challenges we took a step-by-step approach, culminating in the release of our most recent method, OSCAR (Online Soft Compression and Reranking), the first online soft-prompt compression model. OSCAR uses distilled embeddings for faster inference* and higher compression.
Our first step in developing OSCAR was to demonstrate how important it is that the LLM ‘understands’ compressed embeddings. We did this by fine-tuning the LLM with an adapter to understand the language of compression. Accuracy with this offline model improved [7], but did not reach the level of the baseline LLM. To close the gap, we applied an existing technique that trains the LLM to give the same answer whether or not its input is compressed [9]. Finally, we made OSCAR efficient by using only the first few layers of the LLM, or small dedicated LLM compressors, for the compression itself.
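One way to implement such a consistency objective is distillation between the model’s outputs on compressed and uncompressed inputs. The sketch below shows the general idea only; the exact recipe in [9] and [10] may differ, e.g. training on teacher-generated answers rather than matching output distributions.

```python
# Sketch of a compression-consistency objective: make the student's answer
# distribution on the compressed prompt match the teacher's on the full
# prompt. The exact training recipe of [9]/[10] may differ.
import torch.nn.functional as F

def consistency_loss(student_logits, teacher_logits):
    # student_logits: from the LLM reading the *compressed* context
    # teacher_logits: from the same (frozen) LLM reading the full context
    return F.kl_div(F.log_softmax(student_logits, dim=-1),
                    F.softmax(teacher_logits, dim=-1),
                    reduction="batchmean")
```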
You can see how OSCAR works in Figure 5, and there are more details in the paper [10].
Figure 5: An illustration of how the OSCAR soft online compression model works.
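For intuition, here is a minimal conceptual sketch of the soft-compression step; the module names, sizes and the tiny stand-in encoder are ours, not the released OSCAR code.

```python
# Conceptual sketch of online soft compression (not the released OSCAR code).
# A small compressor encodes retrieved text into a handful of "memory"
# embeddings that replace the full document in the LLM's input.
import torch
import torch.nn as nn

class SoftCompressor(nn.Module):
    def __init__(self, hidden=1024, n_mem=8, n_layers=2):
        super().__init__()
        # OSCAR builds the compressor from the first few layers of the LLM
        # or from a small dedicated LLM; a tiny encoder stands in here.
        layer = nn.TransformerEncoderLayer(hidden, nhead=16, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.memory = nn.Parameter(torch.randn(1, n_mem, hidden))  # learned slots

    def forward(self, doc_embeds):  # doc_embeds: (batch, doc_len, hidden)
        mem = self.memory.expand(doc_embeds.size(0), -1, -1)
        out = self.encoder(torch.cat([mem, doc_embeds], dim=1))
        return out[:, : self.memory.size(1)]  # (batch, n_mem, hidden)

compressor = SoftCompressor()
doc = torch.randn(1, 1000, 1024)  # a 1,000-token document, embedded
mem = compressor(doc)             # -> (1, 8, 1024): 125x fewer positions
# At generation time the LLM consumes torch.cat([mem, query_embeds], dim=1)
# instead of the full document, e.g. via inputs_embeds in Hugging Face models.
```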
OSCAR is model-specific and requires fine-tuning the LLM, but the results are worth it, providing:
1. online compression rates higher than any existing method, even better than hard-prompt models like PROVENCE;
2. accuracy comparable to or better than other offline or online methods; and
3. lower latency for the same or better quality of output.
RAG is essential for enhancing LLMs with retrieved external knowledge to improve their accuracy and trustworthiness, but its scalability is constrained by high computational costs. Different document compression methods have emerged to address this problem, categorized as online or offline and as hard or soft compression. Online methods perform better than offline ones but need to be fast. Hard compression is LLM-agnostic, but its compression gains are more limited than those of soft methods, which require fine-tuning the LLM. Compression methods can also be applied to domains other than RAG, such as robotics, for smoother human-robot interaction or when robots need to reason efficiently over their past experience.
Here we’ve released several complementary online compression models that improve RAG, in particular our PROVENCE and OSCAR models. OSCAR is the first soft online compression model, which makes it the fastest (once the LLM has been fine-tuned), while plug-and-play PROVENCE is the easiest to use. We invite you to try out both models and tell us what you think!
This research was carried out by all the authors above and in collaboration with David Rau (University of Amsterdam, now at Cohere), and Shuai Wang (Queensland University). Special thanks to our collaborators and to Hugging Face for hosting our models.
1: RECOMP: Improving Retrieval-Augmented LMs with Context Compression and Selective Augmentation, Xu, Fangyuan et al., International Conference on Learning Representations (ICLR), 2024.
2: LLMLingua: Compressing Prompts for Accelerated Inference of Large Language Models, Jiang, Huiqiang et al., Conference on Empirical Methods in Natural Language Processing (EMNLP), 2023.
3: LLMLingua-2: Data Distillation for Efficient and Faithful Task-Agnostic Prompt Compression, Pan, Zhuoshi et al., Findings of the Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
4: In-Context Autoencoder for Context Compression in a Large Language Model, Ge, Tao et al., International Conference on Learning Representations (ICLR), 2024.
5: TurboRAG: Accelerating Retrieval-Augmented Generation with Precomputed KV Caches for Chunked Text, Lu, Songshuo et al., arXiv:2410.07590, 2024.
6: Dodo: Dynamic Contextual Compression for Decoder-only LMs, Qin, Guanghui et al., Annual Meeting of the Association for Computational Linguistics (ACL), 2024.
7: Context Embeddings for Efficient Answer Generation in RAG, Rau, David et al., The 18th International Conference on Web Search and Data Mining (WSDM), 2025.
8: Provence: Efficient and Robust Context Pruning for Retrieval-Augmented Generation, Chirkova, Nadezhda et al., The 13th International Conference on Learning Representations (ICLR), 2025.
9: PISCO: Pretty Simple Compression for Retrieval-Augmented Generation, Louis, Maxime et al., Annual Meeting of the Association for Computational Linguistics (ACL), 2025; arXiv:2501.16075.
10: OSCAR: Online Soft Compression and Reranking, Louis, Maxime et al., arXiv:2504.07109, 2025.
* Inference is the process whereby a model uses its learned knowledge to generate a response to a new input.