Many of us have freedom of speech and can easily publish our opinions online, but how many of us feel that our voice is heard? Unless you’re an influencer, your blog posts are read by only a few friends, and voting in elections or giving star ratings to restaurants is not exactly rich in self-expression. The result is a fundamental asymmetry: the opinions of only a few are heard by everyone, yet no one can possibly listen to everyone’s opinions.
But computers can. What’s to stop them from telling us what everyone out there is saying? And what would that mean? If a computer could automatically summarise everyone’s opinions and broadcast that summary to all, then each of us would be able to see that our opinion had been heard. This is on the not-so-distant horizon, and it’s something citizens will come to expect, as will customers, employees and members of any kind of organisation.
At XRCE, we’re working on Artificial Intelligence software that can understand and summarise opinions. It includes specific technology for understanding the sentiments and aspects of opinions, and generic deep learning architectures for understanding text. But the area where most progress is needed is reducing the vast diversity of individual opinions to a comprehensible summary.
The core technology we’re missing is ‘abstraction’. We need to be able to abstract away from the diverse details of individual opinions and find the consensus amongst large groups. For example, if one person says “Universal health care systems have lower long term health costs as they encourage patients to seek preventative care” and someone else says “Public health insurance is less costly than private insurance to the overall economy”, we want to recognise that they agree that “Public healthcare is less expensive”.
In natural language processing, abstraction is known as ‘textual entailment’. A statement ‘y’ entails a statement ‘x’ if ‘x’ is an abstraction of ‘y’, meaning all the information in ‘x’ is also in ‘y’. Any information in ‘y’ which is not in ‘x’ is the information being abstracted away. For example, “Public healthcare is less expensive” abstracts away from the claim that public healthcare encourages patients to seek preventative care.
Our recent work on entailment in text has focussed on abstraction in the meaning of words. Word meaning is a core issue in natural language understanding: words are the fundamental building blocks of the meaning of a text, there are a great many of them, and their meanings are notoriously hard to define.
One very successful approach to the meaning of words has been distributional semantics. This hypothesises that you can infer the meaning of a word by looking at the distribution of other words which appear near it in a very large corpus of text. This distribution is then compressed into a vector of real numbers, such that words with similar distributions, and therefore similar meanings, also have similar vectors. These vectors have played a key role in the success of deep learning in language understanding.
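As a toy illustration of this idea, the snippet below compares such word vectors with cosine similarity, the standard symmetric measure (the words, vectors and values are invented for illustration; real vectors are learned from co-occurrence statistics in a large corpus):

```python
import numpy as np

# Toy word vectors. Real ones are learned from the distribution of
# neighbouring words in a large corpus; these values are made up.
vectors = {
    "cat":    np.array([0.9, 0.8, 0.1]),
    "dog":    np.array([0.8, 0.9, 0.2]),
    "treaty": np.array([0.1, 0.0, 0.9]),
}

def cosine(u, v):
    """Symmetric similarity: high when two words have similar distributions."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print(cosine(vectors["cat"], vectors["dog"]))     # high: similar contexts
print(cosine(vectors["cat"], vectors["treaty"]))  # low: different contexts
```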
In the European labs, we’ve developed a version of distributional semantics to model abstraction in the meaning of words[i]. We first developed a new deep learning architecture based on entailment between vectors, as opposed to similarity between vectors. We then used these entailment vectors to define a new model of distributional semantics, and trained new entailment vectors for words. The resulting model of abstraction in words gives the best published results on predicting hyponym-hypernym word pairs, such as “cat” (hyponym) entails “animal” (hypernym).
Standard deep learning architectures are based on the dot product between vectors, which measures the symmetric similarity of the two vectors. We propose vector operators which measure the asymmetric inclusion of one vector within another. This is derived from an interpretation of vectors in terms of things we know; if the things we know given vector ‘x’ are included in the things we know given vector ‘y’, then ‘y’ entails ‘x’, and the operator will give a high score. This framework also allows one vector to be calculated from other vectors which it entails or is entailed by. By specifying networks of vectors which entail each other and performing these vector calculations, we get a new form of deep learning architecture where the vectors and the structure of the model have a clear interpretation in terms of entailment.
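The precise operators are defined in the paper cited at the end of this article; the sketch below only conveys the idea under one simple reading, where each vector component is the log-odds that a particular feature is ‘known’. All vectors, features and numbers here are invented for illustration, not taken from the trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entailment_score(y, x):
    """Asymmetric score for 'y entails x'.

    Reads component i as the log-odds that feature i is known, so
    sigmoid(x[i]) = P(feature i known given x). For y to entail x, every
    feature known given x must also be known given y, so the score
    penalises features likely known given x but unlikely known given y.
    """
    p_known_x = sigmoid(x)
    p_unknown_y = 1.0 - sigmoid(y)
    return np.sum(np.log(1.0 - p_known_x * p_unknown_y))

# Toy log-odds vectors over hypothetical features [feline, furry, animal].
cat    = np.array([ 3.0,  3.0,  3.0])   # knowing "cat" implies all three
animal = np.array([-3.0, -3.0,  3.0])   # knowing "animal" implies only one

print(entailment_score(cat, animal))  # near 0: "cat" entails "animal"
print(entailment_score(animal, cat))  # very negative: the reverse fails
```

Unlike a dot product, this score is asymmetric: swapping the arguments changes the answer, which is exactly what entailment requires.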
The intuition behind our distributional semantic model is that words which occur together in a text should (on average) be consistent and redundant with each other. For example, in “furry cat”, there is nothing inconsistent between something being both furry and a cat, and both these concepts share the property of having fur. We model this with a new vector ‘y’ which is the unification of the vectors for the two words, meaning that, in this example, ‘y’ must entail both “furry” and “cat”. If these entailments have good scores and ‘y’ has a high probability, then the words are consistent and redundant. By training the word vectors so that this score reflects the distributions of word co-occurrences in a very large corpus of text, we get vectors which reflect what we know given a word.
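Continuing the same toy reading, one simple way to realise unification is a component-wise maximum: the resulting vector ‘knows’ every feature that either word knows, so it entails both inputs. Again, this is an illustrative sketch rather than the trained model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entailment_score(y, x):
    # As in the previous sketch: log-probability that everything known
    # given x is also known given y.
    return np.sum(np.log(1.0 - sigmoid(x) * (1.0 - sigmoid(y))))

# Toy log-odds vectors over hypothetical features [feline, furry, animal].
furry = np.array([-3.0,  3.0, -3.0])   # "furry" only makes furriness known
cat   = np.array([ 3.0,  3.0,  3.0])   # "cat" makes all three known

# Unification: the component-wise max knows whatever either word knows.
y = np.maximum(furry, cat)
print(entailment_score(y, furry))  # near 0: y entails "furry"
print(entailment_score(y, cat))    # near 0: y entails "cat"
```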
To evaluate these vectors, we used data on abstraction relationships between words, called hyponymy. A hyponym-hypernym pair, like the “cat-animal” example given above, is one where the hypernym is entailed by the hyponym: in this example, knowing something is a cat includes knowing it is an animal. With just the original vectors learned from text, we reach 70% accuracy in classifying word pairs for hyponymy, and if we train a mapping to select the features of the vectors which are relevant to hyponymy, we reach 86% accuracy on new words. These are the best results published so far.
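As a minimal sketch of this kind of classification, one could threshold the toy entailment score from the previous snippets (the threshold here is invented; the published accuracies come from vectors trained on a very large corpus and, for the 86% figure, a learned mapping, not from anything this simple):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def entailment_score(y, x):
    # Log-probability that everything known given x is also known given y.
    return np.sum(np.log(1.0 - sigmoid(x) * (1.0 - sigmoid(y))))

def is_hyponym(hypo, hyper, threshold=-1.0):
    """Classify a word pair: the hyponym should entail the hypernym.
    The threshold is illustrative; in practice it would be tuned on
    held-out hyponym-hypernym pairs."""
    return entailment_score(hypo, hyper) > threshold

cat    = np.array([ 3.0,  3.0,  3.0])   # toy log-odds: feline, furry, animal
animal = np.array([-3.0, -3.0,  3.0])   # toy log-odds: animal only

print(is_hyponym(cat, animal))  # True:  "cat" is a hyponym of "animal"
print(is_hyponym(animal, cat))  # False: the relation is asymmetric
```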
Right now, we’re extending this model of entailment between words to a deep-learning model of entailment between sentences, which we expect to have within the year. That’s the critical piece of Artificial Intelligence technology we need to then model abstraction in opinions, and to compress very large numbers of opinions into short comprehensible summaries. Check the XRCE blog for updates on our progress. Once we have this summarisation technology, we’re optimistic that everyone’s right to be heard will quickly follow.
Author of this article: James Henderson, who was part of NAVER LABS Europe until September 2017.
——————————–
[i] Henderson, J. and Popa, D., 2016. “A Vector Space for Distributional Semantics for Entailment”, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pp. 2052–2062, Berlin, Germany, August 7–12, 2016. [PDF]
To be autonomous in real-world, everyday spaces, robots should be able to learn from their interactions within these spaces how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimisation problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimisation to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments.
For a robot to be useful, it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low-code/no-code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans, their physical environment, and other robots and systems, and will let them represent and update their world knowledge and share it with the rest of the fleet.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, objects and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate their 3D environment, detect and interact with surrounding objects and people, and continuously adapt when deployed in new environments.
Details of the 2024 gender equality index (based on 2023 data) for NAVER France, which scored 87/100:
1. Difference in female/male salary: 34/40 points
2. Difference in individual salary increases between women and men: 35/35 points
3. Salary increases upon return from maternity leave: not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes, which can be used for human behaviour understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to continuously adapt itself to its environment and improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, in autonomous driving, via phone cameras and through any other visual means that reach people wherever they may be.