Our Global BERT-based Transformer architecture fuses global and local information at every layer, resulting in a reading comprehension model that achieves a deeper understanding for long documents and enables flexibility for downstream tasks.
Quentin Grail, Julien Perez | 2021
For computers, reading comprehension is a challenging task. Recent advances in neural network architecture have greatly improved the ability of machines to process and understand passages of written text. However, most models still fall short when it comes to understanding the content of longer documents. Successful approaches could enable the creation of automatic summaries from any input text, such as scientific papers or restaurant reviews. Other contexts in which such models could prove useful include question answering on long documents, fact checking and document retrieval.
These days, machine-learning techniques for natural language processing (NLP) generally rely on pretrained language models, such as BERT (Bidirectional Encoder Representations from Transformers) (1). These models have become essential building blocks in the development of deep-learning models for reading comprehension and have improved on state-of-the-art performance for an extensive collection of NLP tasks, such as question answering and sentiment analysis (2).
Most recent competitive architectures, including BERT (1, 3, 4), are based on stacked Transformer ‘encoder’ and ‘decoder’ layers (5). Transformers are a kind of deep-learning model that uses the mechanism of ‘attention’, where the model considers all words in the input text—paying particular attention to those that it deems important, which are assigned a greater ‘weight’—to create a context by which it can better understand the meaning of each subsequently processed word.
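As a reminder, the scaled dot-product attention at the core of these layers (5) is

$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{QK^{\top}}{\sqrt{d_k}}\right)V,$$

where $Q$, $K$ and $V$ are the query, key and value projections of the token representations and $d_k$ is the key dimension; the score matrix $QK^{\top}$ has one entry for every pair of tokens in the sequence.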
Although the attention mechanism of these Transformer layers is effective in terms of both performance and inference speed (2), it does bring with it some restrictions. First, Transformer layers are computationally expensive: the memory required by self-attention grows quadratically with the number of words, or input tokens, in a sequence, which makes processing long documents impractical. A second restriction is related to positional embeddings (i.e. the representations added to each word to encode its position in the text). By design, the Transformer's attention is independent of the word's position in the input sequence, so most recent architectures use a trainable positional encoding layer. This means that, to ensure the positional information is encoded correctly, the maximum length of the input text must be fixed at the pretraining stage, as it's difficult to extend when the model is being fine-tuned on downstream tasks.
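To make the quadratic growth concrete, the back-of-the-envelope helper below (our illustration, assuming a BERT-base-like layer with 12 attention heads) counts the attention scores a single layer must hold.

```python
# Rough illustration (not from the paper): the self-attention score matrix
# grows quadratically with the number of input tokens n.
def attention_scores_per_layer(n_tokens: int, n_heads: int = 12) -> int:
    """Number of attention scores one Transformer layer must hold in memory."""
    return n_heads * n_tokens * n_tokens

# A 512-token input needs ~3.1M scores per layer; a 5,000-token arXiv paper ~300M.
print(attention_scores_per_layer(512))    # 3,145,728
print(attention_scores_per_layer(5000))   # 300,000,000
```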
We aim to overcome these issues. Our objectives are to design a Transformer-based model that is capable of processing long documents; to build an architecture that can take full advantage of available pretrained language models; and to produce meaningful representations of tokens, contextualized in the full document, for downstream tasks.
To achieve our aims, we propose a new hierarchical structure that leverages pretrained Transformers to encode long documents, as depicted in Figure 1. The novelty of our approach lies in its interweaving of Transformer layers (that encode relatively short local context, usually around 30 words) with propagation layers (that spread the knowledge throughout the document). Formally, the input document is first divided into a sequence of blocks, which in this case correspond to sentences. The first layer of the Transformer then independently processes each block to build local representations of the tokens. Each block is represented by its first token. Then, the propagation layer—a bidirectional gated recurrent unit (BiGRU) neural network—processes the sequence of sentence representations for the entire document to spread this local knowledge. At this stage, the first token of each block (the classification token, denoted [CLS]) is a local representation of the sentence that is enriched with global information from the entire context (the rest of the document). We repeat this process of local encoding and knowledge propagation for all layers of the pretrained architecture.
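Below is a minimal PyTorch sketch of one such stage, purely to make the data flow concrete. The class and variable names are ours, the dimensions assume a BERT-base-like encoder, and nn.TransformerEncoderLayer merely stands in for a pretrained BERT layer that would, in practice, be loaded from a checkpoint; this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class LocalEncoderWithPropagation(nn.Module):
    """One 'local Transformer + propagation' stage (hypothetical names/sizes).

    Input:  blocks -- (n_blocks, block_len, hidden) token states, one row per
            sentence, where position 0 of each block is its [CLS] token.
    Output: same shape, with every [CLS] token enriched by document context.
    """
    def __init__(self, hidden: int = 768, n_heads: int = 12):
        super().__init__()
        # Stand-in for one pretrained BERT encoder layer (initialized from BERT in the paper).
        self.local_layer = nn.TransformerEncoderLayer(
            d_model=hidden, nhead=n_heads, batch_first=True)
        # Propagation layer: a BiGRU over the sequence of [CLS] tokens.
        self.propagate = nn.GRU(hidden, hidden // 2,
                                bidirectional=True, batch_first=True)

    def forward(self, blocks: torch.Tensor) -> torch.Tensor:
        # 1) Local encoding: each sentence/block is processed independently.
        local = self.local_layer(blocks)                # (n_blocks, block_len, hidden)
        # 2) Gather the [CLS] token of every block and run the BiGRU over the document.
        cls_seq = local[:, 0, :].unsqueeze(0)           # (1, n_blocks, hidden)
        global_cls, _ = self.propagate(cls_seq)         # (1, n_blocks, hidden)
        # 3) Write the globally contextualized [CLS] back into each block.
        local = local.clone()
        local[:, 0, :] = global_cls.squeeze(0)
        return local

# Stacking one such stage per pretrained layer interleaves local encoding
# with document-level propagation, as described above.
doc = torch.randn(204, 32, 768)   # e.g. 204 sentences of up to ~30 tokens each
stage = LocalEncoderWithPropagation()
out = stage(doc)                  # same shape, [CLS] tokens now document-aware
```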
Local Transformer functions are initialized from available pretrained language models, such as BERT or RoBERTa (a Robustly Optimized BERT Pretraining Approach) (3). Because we construct and propagate document-level information between the layers, global and local information is fused at every layer of the architecture rather than only at the top layer (6). We use sentences as local blocks because their lengths are short enough for the pretrained Transformers to handle. The output layer then computes the final representations, which can be adapted to any downstream task. For extractive summarization, this output layer is a feedforward neural network applied to each sentence representation, acting as a classifier that labels each sentence as 'selected' or 'unselected' for the summary.
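For the summarization head described above, a minimal sketch (again with our own naming, under the same BERT-base-sized assumptions) could be:

```python
import torch
import torch.nn as nn

class SentenceClassifierHead(nn.Module):
    """Hypothetical feedforward output head: scores each sentence representation
    as 'selected' vs. 'unselected' for the extractive summary."""
    def __init__(self, hidden: int = 768):
        super().__init__()
        self.ff = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, sentence_reprs: torch.Tensor) -> torch.Tensor:
        # sentence_reprs: (n_sentences, hidden) final [CLS] states, one per sentence
        return self.ff(sentence_reprs).squeeze(-1)   # (n_sentences,) selection logits

# Training would typically use a binary cross-entropy loss against the
# sentence-level 'in summary' labels, e.g. nn.BCEWithLogitsLoss().
```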
To evaluate the quality of our approach, we use our model to extract the few sentences that best summarize the content of long documents from two summarization datasets (arXiv and PubMed; see Table 1 for details). This task, known as extractive summarization (7), tests several skills that we consider necessary for reading comprehension models. We tackle it as a sentence-level classification problem, where each sentence is labelled according to whether it belongs to the summary.
To be effective, the model must be capable of two things. First, it should be able to understand long documents beyond the first few tokens. Second, the model needs to have a local understanding of each sentence, so that it’s capable of considering whether a sentence is meaningful or not, and a global understanding of the document, in order to produce a coherent summary.
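At inference time, one simple way to turn per-sentence scores into a summary is to keep the highest-scoring sentences and restore document order. The helper below is our illustration; the default k roughly matches the average summary lengths reported in Table 1 and is not necessarily the decoding rule used in the paper.

```python
from typing import List
import torch

def select_summary(sentence_logits: torch.Tensor,
                   sentences: List[str], k: int = 6) -> str:
    """Keep the k highest-scoring sentences, in document order, as the summary."""
    k = min(k, len(sentences))
    top = torch.topk(sentence_logits, k).indices.sort().values
    return " ".join(sentences[i] for i in top.tolist())
```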
Table 1: Average document and summary lengths in the arXiv and PubMed datasets.

| Dataset | Avg. document length (sentences) | Avg. document length (words) | Avg. summary length (sentences) | Avg. summary length (words) |
|---|---|---|---|---|
| arXiv | 204 | 5038 | 5.6 | 165 |
| PubMed | 88 | 3235 | 6.8 | 205 |
We use ROUGE scores to evaluate the summaries extracted by our model against the abstracts of the papers from which these datasets were built (8). The models we use for comparison include a BERT Ranker (9), which ranks each sentence independently without reading the whole document, and BertSumExt (10), a Transformer-based model that processes the full paper at once and achieves state-of-the-art performance on extractive summarization of short documents. Because BertSumExt doesn't scale to long inputs, we implement a sliding-window version (BertSumExt SW) in which the input document is divided into multiple overlapping windows. Moreover, we compare our model against two recently proposed scalable Transformers, Reformer (11) and Longformer (12). Results are shown in Table 2, and additional comparisons are described in our paper (6).
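As a side note, ROUGE F-scores of this kind can be computed with the open-source rouge-score package; the snippet below is a generic usage example with placeholder strings, and the paper does not state which ROUGE toolkit was used.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
reference = "the paper abstract used as the gold summary ..."   # placeholder text
candidate = "the sentences extracted by the model ..."          # placeholder text
scores = scorer.score(reference, candidate)
print({name: round(s.fmeasure, 4) for name, s in scores.items()})
```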
Table 2: ROUGE (RG) scores of abstractive/mixed and extractive summarizers on the PubMed and arXiv datasets.

| | PubMed | | | | arXiv | | | |
|---|---|---|---|---|---|---|---|---|
| Summarizer | RG-1 | RG-2 | RG-3 | RG-L | RG-1 | RG-2 | RG-3 | RG-L |
| *Abstractive or mixed* | | | | | | | | |
| Oracle | 58.15 | 34.16 | 24.11 | 52.99 | 57.78 | 30.43 | 18.41 | 51.24 |
| Lead | 37.77 | 13.35 | 7.64 | 34.31 | 35.54 | 9.50 | 3.33 | 31.19 |
| Attn-Seq2Seq (Nallapati et al., 2016) | 31.55 | 8.52 | 7.05 | 27.38 | 29.30 | 6.00 | 1.77 | 25.56 |
| Pntr-Gen-Seq2Seq (See et al., 2017) | 35.86 | 10.22 | 7.60 | 29.69 | 32.06 | 9.04 | 2.15 | 25.16 |
| Discourse summarizer (Cohan et al., 2018) | 38.93 | 15.37 | 9.97 | 35.21 | 35.80 | 11.05 | 3.62 | 31.80 |
| TLM-I+E (G,M) (Subramanian et al., 2019) | 42.13 | 16.27 | 8.82 | 39.21 | 41.62 | 14.69 | 6.16 | 38.03 |
| DANCER PEGASUS (Gidiotis and Tsoumakas, 2020) | 46.34 | 19.97 | - | 42.42 | 45.01 | 17.60 | - | 40.56 |
| PEGASUS (Zhang et al., 2020) | 45.97 | 20.15 | - | 28.25 | 44.21 | 16.95 | - | 25.67 |
| BigBird-Pegasus (Zaheer et al., 2020) | 46.32 | 20.65 | - | 42.33 | 46.63 | 19.02 | - | 41.77 |
| *Extractive* | | | | | | | | |
| SumBasic (Vanderwende et al., 2007) | 37.15 | 11.36 | 5.42 | 33.43 | 29.47 | 6.95 | 2.36 | 26.30 |
| LexRank (Erkan and Radev, 2004) | 39.19 | 13.89 | 7.27 | 34.59 | 33.85 | 10.73 | 4.54 | 28.99 |
| LSA (Steinberger and Jezek, 2004) | 33.89 | 9.93 | 5.04 | 29.70 | 29.91 | 7.42 | 3.12 | 25.67 |
| Sent-CLF (Subramanian et al., 2019) | 45.01 | 19.91 | 12.13 | 41.16 | 34.01 | 8.71 | 2.99 | 30.41 |
| Sent-PTR (Subramanian et al., 2019) | 43.30 | 17.92 | 10.67 | 39.47 | 42.32 | 15.63 | 7.49 | 38.06 |
| Bert Ranker (Nogueira and Cho, 2019) | 43.67 | 18.00 | 10.74 | 39.22 | 41.65 | 13.88 | 5.92 | 36.40 |
| BertSumExt (Liu and Lapata, 2019) | 41.09 | 15.51 | 8.64 | 36.85 | 41.24 | 13.01 | 5.26 | 36.10 |
| BertSumExt (SW) (Liu and Lapata, 2019) | 45.01 | 20.00 | 12.05 | 40.43 | 42.93 | 15.08 | 6.01 | 37.22 |
| Longformer-Ext (Beltagy et al., 2020) | 43.75 | 17.37 | 10.18 | 39.71 | 45.24 | 16.88 | 8.06 | 40.03 |
| Reformer-Ext (Kitaev et al., 2020) | 42.32 | 15.91 | 9.02 | 38.26 | 43.26 | 14.86 | 6.66 | 38.10 |
| GBT-ExtSum (Ours) | 46.87 | 20.19 | 12.11 | 42.68 | 48.08 | 19.21 | 9.58 | 42.68 |
As can be seen in Table 2, our model outperforms the other approaches on almost all metrics. This is particularly true for the arXiv dataset, where the documents tend to be longer than in PubMed. Sent-CLF, another hierarchical approach among the models tested, is the baseline closest to ours on the PubMed dataset, but performs comparatively poorly on arXiv. Overall, BertSumExt seems to be the approach closest to ours in terms of both results and architecture. However, there remains a significant gap between the results of all models and the Oracle scores, which suggests that there is still room for improvement on this task. We provide a deeper analysis of all these results in the full version of our paper (6).
Figures 2 and 3 show two examples of extracted summaries produced by the models developed using our approach. Figure 2 shows an extracted summary for the paper ‘Attention Is All You Need’ (5) achieved with a model trained on arXiv papers. Additionally, we found that a model trained to summarize scientific papers is also able to produce relevant summaries in different contexts. In Figure 3, we present an example of a summary produced by a model trained on arXiv and tested on a collection of hotel reviews. We can see that, from a human perspective, the proposed summaries look coherent and provide meaningful information for the reader.
Our novel Transformer-based model for long-document summarization is based on coupled Transformer layers and propagation layers that encode and spread information between the blocks of a document. This model preserves the architecture of commonly used pretrained language models, enabling it to take advantage of their pretrained parameters. An evaluation of our BERT-based model on an extractive summarization task further revealed its effectiveness in dealing with long documents compared with other adaptations of BERT and previously proposed models.
Although we focused our evaluations on extractive summarization, we’re interested in testing our architecture on other tasks. In the future, we plan to adapt our model to more tasks that require comprehension of long documents, such as question answering, document-scale machine translation and information retrieval. We’d also like to further investigate how these models, which are trained to summarize scientific papers, can transfer to different corpora, such as hotel/restaurant reviews or news articles.
This work was done in collaboration with the University Grenoble Alpes and the MIAI Grenoble Alpes Institute.