Papers and activities at this year’s conference.
Matthias Gallé, Hady Elsahar, Quentin Grail, Jos Rozen, Julien Perez | 2021
The European Chapter of the ACL organizes one of the major Natural Language Processing events: EACL, which, because of COVID, moved from Kyiv, Ukraine to a virtual format and is happening this week.
NAVER LABS Europe is well represented, with several contributions highlighting our work in natural language generation. Modern deep learning networks have revolutionized this space with new tools that go beyond the short context (the so-called Markovian assumption) of previous models. They allow for richer data-driven representations through pre-training on large quantities of textual data, and for more flexible interactions thanks to the self-attention mechanism.
However, several challenges remain. This year, our research contributions focus on the following.
Current models have dropped the Markovian hypothesis, which assumed that only the previous few words are relevant for deciding how a text should continue. In practice, though, the complexity of modern Transformer models grows quadratically with the length of the prefix being considered, which makes them impractical for many applications that rely on longer context.
In Globalizing BERT-based transformer architectures for long document summarization, we develop an approach that extends a given Transformer-based language model to long documents. We propose a hierarchical approach that combines local and global encodings of a document. This architecture interweaves Transformer functions and propagation layers: the Transformers are in charge of encoding local contexts, typically sentences, while the propagation layers spread information across the document. These local and global representations of the document are fused at every layer of the model. The local Transformer functions are initialized from pre-trained language models, such as BERT/RoBERTa, thus benefiting from their additional knowledge. We demonstrate the effectiveness of the proposed architecture on the task of extractive summarization of scientific papers from arXiv and PubMed. You can read more in the accompanying blog post A scalable Transformer architecture for summarizing long documents.
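To make the hierarchical idea more concrete, here is a minimal sketch (not the paper's code) of a layer that interweaves a local Transformer over each sentence with a propagation layer over sentence representations, fusing the two by adding the document-level signal back onto every token. The layer sizes, the mean-pooling of sentences and the additive fusion are illustrative assumptions.

```python
# Minimal sketch of a local/global layer, assuming mean-pooled sentence
# vectors and additive fusion; the paper's exact fusion rule may differ.
import torch
import torch.nn as nn

class LocalGlobalLayer(nn.Module):
    def __init__(self, d_model=768, nhead=12):
        super().__init__()
        # Local Transformer: encodes tokens within each sentence independently.
        self.local = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Propagation layer: lets sentence representations exchange
        # information across the whole document.
        self.propagate = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)

    def forward(self, tokens):
        # tokens: (num_sentences, sentence_len, d_model) for one document.
        local = self.local(tokens)                    # local context per sentence
        sent_repr = local.mean(dim=1)                 # one vector per sentence
        global_repr = self.propagate(sent_repr.unsqueeze(0)).squeeze(0)
        # Fuse: broadcast the document-level signal back onto every token.
        return local + global_repr.unsqueeze(1)

# Toy usage: a 10-sentence document with 32 tokens per sentence.
doc = torch.randn(10, 32, 768)
out = LocalGlobalLayer()(doc)
print(out.shape)  # torch.Size([10, 32, 768])
```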
Training these models to perform specific generation tasks, like translation, summarization or data-to-text production (think weather forecasts or sports-game summaries), in the standard framework of supervised learning is expensive. This is because obtaining annotated examples for those tasks involves a cognitively heavy effort, and the large variety of valid outputs requires a correspondingly wide set of annotated generations.
In Self-supervised and controlled multi-document opinion summarization we look at how a self-supervised approach can alleviate that problem. Self-supervision extracts a supervision signal from otherwise unlabelled data, which can then be used within standard supervised frameworks. In that work we apply this idea to the problem of summarizing user-generated reviews: the self-supervision consists in treating one review as the summary of a set of other reviews for the same product. We can then forgo gold summaries to train a model, and the vast quantity of existing reviews exposes the model to many variations.
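As a rough illustration of that self-supervision signal, the sketch below builds training pairs by holding out one review per product as a pseudo-summary of a sample of the others. The field names ("product_id", "text") and the sampling scheme are assumptions for illustration, not the paper's exact procedure.

```python
# Sketch: turn raw reviews into (source reviews, pseudo-summary) pairs.
import random
from collections import defaultdict

def build_self_supervised_pairs(reviews, reviews_per_input=8, seed=0):
    """reviews: list of dicts like {"product_id": ..., "text": ...} (assumed schema)."""
    rng = random.Random(seed)
    by_product = defaultdict(list)
    for r in reviews:
        by_product[r["product_id"]].append(r["text"])

    pairs = []
    for product, texts in by_product.items():
        if len(texts) <= reviews_per_input:
            continue
        # One review acts as the target "summary" ...
        target = rng.choice(texts)
        # ... and a sample of the remaining reviews forms the input set.
        others = [t for t in texts if t is not target]
        source = rng.sample(others, reviews_per_input)
        pairs.append({"source_reviews": source, "pseudo_summary": target})
    return pairs
```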
The capabilities of large neural networks are astonishing, but they often resemble a powerful and heavy robot: extremely useful when it correctly achieves a task, but hard to maneuver and nudge in a particular direction. The field of controlled natural language generation is concerned with different ways of exercising some guidance over the produced text.
In the self-supervised opinion-summarization work described above we use so-called control tokens: control mechanisms expressed in the textual form those neural networks expect. The originality of our contribution is that these tokens are not fixed but are inferred from the original reviews we wish to summarize. Our experiments show that including them yields more on-topic summaries.
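To illustrate the idea, here is a hedged sketch in which hypothetical control tokens (simple frequent keywords) are inferred from the input reviews and prepended to the text fed to the model. The frequency-based extraction and the delimiter tokens are stand-ins for however the control tokens are actually inferred and encoded in the paper.

```python
# Sketch: infer illustrative control tokens from the reviews and prepend
# them, in plain text, to the model input.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "is", "it", "this", "was", "very", "i"}

def infer_control_tokens(reviews, k=5):
    words = re.findall(r"[a-z]+", " ".join(reviews).lower())
    counts = Counter(w for w in words if w not in STOPWORDS and len(w) > 2)
    return [w for w, _ in counts.most_common(k)]

def build_model_input(reviews):
    controls = infer_control_tokens(reviews)
    # Control tokens go in front of the concatenated reviews, in the textual
    # form a sequence-to-sequence model expects (delimiters are assumptions).
    return "<ctrl> " + " ".join(controls) + " <sep> " + " </s> ".join(reviews)

print(build_model_input(["The battery life is great.", "Great battery, poor screen."]))
```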
An alternative way of exercising more control is to fine-tune large pre-trained models on the desired task. This is, however, often considered expensive, as updating all those parameters can require a long time on powerful machines. In the demo Breaking Writer’s Block: Low-cost Fine-tuning of Natural Language Generation Models we show that it is nonetheless a viable alternative. With only $150 of cloud credits, a GPT-2 model is fine-tuned to perform a very different task: filling in a missing paragraph based on a number of facets, namely the surrounding paragraphs, named entities, the genre of the text and a summary of the desired paragraph.
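As a rough sketch of what such fine-tuning can look like (using the Hugging Face transformers library, which is an assumption; the demo's actual setup and delimiters may differ), the facets are serialized into a single conditioning prompt followed by the target paragraph, and GPT-2 is trained with its ordinary language-modelling loss.

```python
# Sketch of facet-conditioned fine-tuning; facet markers like [GENRE] are
# illustrative, not the demo's actual format.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

def make_example(prev_par, next_par, entities, genre, summary, target):
    # Serialize the facets into one prompt, followed by the missing paragraph.
    prompt = (f"[GENRE] {genre} [ENTITIES] {', '.join(entities)} "
              f"[SUMMARY] {summary} [BEFORE] {prev_par} [AFTER] {next_par} [PARAGRAPH] ")
    return prompt + target + tokenizer.eos_token

def training_step(text):
    ids = tokenizer(text, return_tensors="pt", truncation=True, max_length=512).input_ids
    # Standard causal LM objective: the model shifts the labels internally.
    loss = model(ids, labels=ids).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```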
In addition to those research collaborations, NAVER LABS Europe is a sponsor of the 2nd AfricaNLP Workshop at EACL, which aims to strengthen African NLP; several members of our research staff are co-organizing it, and we’re also presenting a paper there on automatic speech recognition.