Mert Bulent Sariyildiz, Julien Perez, Diane Larlus
2020
Computer vision has come a long way in recent years. Thanks to advances in machine learning, computers are now better than humans at some visual tasks, such as lip reading and certain fine-grained categorization problems (1, 2, 3). Many computer vision tasks rely on strong visual features, however, and extremely large datasets have traditionally been required to obtain such visual representations. Additionally, for every new task, new models would typically need to be trained from scratch. To simplify the process and reduce the cost of developing new computer vision applications, it has become standard to pre-train convolutional neural networks (CNNs) on a proxy task to create powerful, generic visual representations that are then ready to be reused for the task at hand.
There are two main ways to achieve this. The most well-established approach relies on large collections of images annotated with fine-grained category labels. A CNN trained to predict these labels makes for a great visual feature extractor. For natural images, the go-to dataset is ImageNet, a collection of 1.3 million images, each manually assigned one of a thousand possible category labels. An obvious drawback of training a CNN on ImageNet is that it assumes a million-scale dataset with expert-level annotations will be readily available for the target task domain, which is often not the case. A large body of work in computer vision is moving away from this unrealistic assumption by requiring no labels at all. So-called self-supervised approaches automatically fabricate labels from the dataset itself, and training for these ‘fake’ prediction tasks leads to similarly generic and transferable visual features, closing the gap with features trained using manual annotations.
Some self-supervised approaches still leverage the nicely balanced structure of ImageNet: although they never look at the labels themselves, they make assumptions about the image statistics and the image-label distribution (4, 5). Others move away from clean, curated datasets altogether and obtain competitive results when given access to much larger collections (typically hundreds of millions of images) without labels (6).
Whether annotated by experts or not, millions of images are a lot to ask for. In our research, we’re taking a different route: we’d like to use fewer images and rely on annotations that can be made by non-experts, or even gathered automatically. Our approach is weakly supervised: it builds on the observation that images with companion text (i.e. captioned images) are a rich source of annotations (7). Image captions are relatively easy for humans to produce, and for certain domains they can also be automatically mined and cleaned.
Once we’ve obtained a set of captioned images, what should we do with them? A naïve approach would be to extract all of the objects named in the captions to create category labels (as in ImageNet). This is clearly a suboptimal solution, however, as captions are much richer than this: in addition to objects that appear in the scene, they often describe their characteristics; the actions that are taking place; information about the surroundings; and so on. Even subtle details contained in the caption could help create more useful representations! So how can we better leverage all this information?
We were inspired by recent advances in the neighbouring research field of natural language processing (NLP). NLP has made great progress by training language models on large corpora of text via a task known as masked language modeling (MLM) (8), which requires no annotation. In this ‘fill-in-the-blank’ task, a randomly selected word in a sentence is hidden. By learning to predict the original value of this masked token, based on the context provided by the other, non-masked words, powerful language models can be created. In NLP, these models have been successfully used for tasks as diverse as sentence classification and question answering.
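To make this fill-in-the-blank task concrete, the short sketch below queries an off-the-shelf pretrained BERT model, using the Hugging Face transformers library purely as an illustration (it is not part of the work described here), to recover a masked word from its textual context alone.

```python
# A short illustration of the fill-in-the-blank (MLM) task, using a pretrained
# BERT from the Hugging Face `transformers` library purely as an example (this
# library and checkpoint are not part of the work described here).
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

# One word of the sentence is replaced by the special [MASK] token.
sentence = "A man is riding a [MASK] down the street."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits                       # (1, seq_len, vocab)

# Predict the hidden word from the context provided by the other tokens.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print(tokenizer.convert_ids_to_tokens(top5.tolist()))     # e.g. bike, horse, ...
```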
What if, in addition to the sentences in these corpora, we also had access to the images associated with them? We could then use the companion image to help predict the masked token in a sentence, whenever that token refers to something visible in the image. This makes sense not only for objects that appear in the scene, but also for visual attributes (like their colour or size) and even for actions. With this in mind, we define a new proxy task called image-conditioned masked language modeling (ICMLM). Training for this task should create visual representations that are good at filling in the blanks in a caption, and that are also ready to be applied to multiple new tasks.
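The sketch below shows how a single ICMLM training example could be formed from a captioned image; the simple whitespace tokenization and the uniform random masking are illustrative assumptions rather than the exact recipe of the method.

```python
# A minimal sketch of how one ICMLM training example could be formed from a
# captioned image. The whitespace tokenization and the uniform random masking
# shown here are illustrative assumptions, not necessarily the paper's recipe.
import random

caption = "A brown dog catches a yellow frisbee on the beach"
tokens = caption.lower().split()

pos = random.randrange(len(tokens))   # pick one word to hide
target_word = tokens[pos]             # the word the model must recover
tokens[pos] = "[MASK]"                # replace it with a mask token
masked_caption = " ".join(tokens)

# The training triplet is (image, masked_caption, target_word): the model must
# predict target_word from the masked caption *and* the paired image, which is
# what makes the task image-conditioned.
```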
With all this in place, we need a CNN architecture capable of solving the ICMLM task. This network must fulfil a few requirements: it needs to encode both modalities (images and text); it must align their representations with the semantic concepts described in the caption; and it should properly localize which part of the image the visual features must focus on in order to guess the masked token. We came up with two architectures that fit these three requirements.
The first architecture, ICMLM-tfm, is based on the transformer architecture, which has proven very successful in NLP (9) and consists of a set of encoding and decoding layers (blocks). ICMLM-tfm predicts the masked token by fusing visual and textual information with several transformer blocks. This means that both the image and the other tokens in the caption are used to predict the masked token.
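The toy sketch below illustrates this kind of fusion: CNN region features and caption token embeddings are concatenated into a single sequence and processed jointly before the representation at the masked position is classified over the vocabulary. The module sizes, the single fusion layer and the stand-in encoders are placeholders for illustration, not the released implementation.

```python
# A toy PyTorch sketch of transformer-based fusion for ICMLM. Module names,
# sizes and the single fusion layer are illustrative placeholders (the actual
# ICMLM-tfm model stacks several transformer blocks on top of a full CNN and
# a BERT-like text encoder); this is not the released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyICMLMtfm(nn.Module):
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        # Stand-in visual backbone producing a 7x7 grid of region features.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, dim, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(7))
        self.embed = nn.Embedding(vocab_size, dim)   # stand-in text encoder
        self.fuse = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.vocab_proj = nn.Linear(dim, vocab_size)

    def forward(self, images, token_ids, mask_pos):
        v = self.cnn(images).flatten(2).transpose(1, 2)   # (B, 49, dim) regions
        t = self.embed(token_ids)                         # (B, L, dim) tokens
        fused = self.fuse(torch.cat([v, t], dim=1))       # joint sequence
        # Read out the representation at the masked position and classify it.
        masked = fused[torch.arange(images.size(0)), v.size(1) + mask_pos]
        return self.vocab_proj(masked)                    # (B, vocab_size)

# One training step: cross-entropy on the identity of the masked word.
model = ToyICMLMtfm()
images = torch.randn(2, 3, 224, 224)
token_ids = torch.randint(0, 30522, (2, 16))
mask_pos = torch.tensor([3, 7])            # index of the masked token
target = torch.randint(0, 30522, (2,))     # its original identity
loss = F.cross_entropy(model(images, token_ids, mask_pos), target)
loss.backward()
```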
The second architecture, ICMLM-attfc, is based on a multi-modal attention network, which weighs visual information to determine which area of the image to focus on. ICMLM-attfc predicts the masked token’s value by relying only on visual information, with the textual information used to focus on the relevant image region(s) (i.e. the ones expected to correspond to the masked token).
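The sketch below conveys the idea behind this second variant, again with placeholder modules rather than the released code: a caption-derived query attends over CNN region features, and only the attended visual feature is passed to the word classifier, while the attention weights indicate which image regions were used.

```python
# A toy sketch (an interpretation with placeholder modules, not the released
# code) of the attention-based variant: a caption-derived query attends over
# CNN region features, and only the attended visual feature is fed to the
# word classifier. Mean-pooling the caption into a single query is a
# simplification made for brevity.
import torch
import torch.nn as nn

class ToyICMLMattfc(nn.Module):
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)    # stand-in text encoder
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.fc = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                nn.Linear(dim, vocab_size))

    def forward(self, region_feats, token_ids):
        # region_feats: (B, R, dim) CNN features; token_ids: (B, L) caption.
        query = self.embed(token_ids).mean(dim=1, keepdim=True)  # (B, 1, dim)
        pooled, weights = self.attn(query, region_feats, region_feats)
        # `weights` (B, 1, R) is the attention map over image regions; only
        # the pooled visual feature is used to predict the masked word.
        return self.fc(pooled.squeeze(1)), weights

model = ToyICMLMattfc()
logits, attn_map = model(torch.randn(2, 49, 256), torch.randint(0, 30522, (2, 16)))
```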
To evaluate both the proposed proxy task (ICMLM) and the ability of our two architectures (ICMLM-tfm and ICMLM-attfc) to solve it, we trained models on the MS COCO and Visual Genome (VG) datasets. These are captioned image sets roughly ten times smaller than ImageNet. We compared the visual features obtained with ICMLM to those produced by the supervised approach (trained on ImageNet) and by self-supervised approaches. Table 1 shows that, for two different CNNs (VGG16 and ResNet50), our architectures achieve competitive results with a fraction of the training-set size.
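For reference, comparisons of this kind are commonly run with a linear-probe protocol, sketched below in generic form (this is not the exact evaluation pipeline of the paper): the pretrained backbone is frozen and only a linear classifier is trained on the target task, so the score reflects the quality of the frozen visual features.

```python
# A generic sketch of the usual linear-probe transfer protocol (not the exact
# evaluation pipeline of the paper): freeze the pretrained backbone and train
# only a linear classifier on the target task.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

backbone = models.resnet50()        # load ICMLM / supervised / SSL weights here
backbone.fc = nn.Identity()         # expose the 2048-d pooled features
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

probe = nn.Linear(2048, 100)        # e.g. a hypothetical 100-class target task
optimizer = torch.optim.SGD(probe.parameters(), lr=0.01, momentum=0.9)

def probe_step(images, labels):
    with torch.no_grad():
        feats = backbone(images)    # frozen, generic visual features
    loss = F.cross_entropy(probe(feats), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example step with random tensors standing in for a real target dataset.
print(probe_step(torch.randn(4, 3, 224, 224), torch.randint(0, 100, (4,))))
```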
In addition to these quantitative results, we observe that our visual features seem to be interpretable. This is shown by the attention maps produced by the ICMLM models when trying to predict masked words (see Figure 3).
We’ve developed an alternative approach to creating generic visual representations that generalize to many computer vision tasks. By considering images with companion captions, we’re able to trade large collections of images, with or without expert-level annotations, for a set of images that is ten times smaller. Our models build visual representations that transfer well to other computer vision tasks by learning to guess masked words in captions. The attention maps produced by our models suggest that the visual representations we obtain could be used not only for recognizing visual concepts (such as the presence of a particular object), but also for localizing them.
The project web page: Learning visual representations with caption annotations
Publication entry: https://europe.naverlabs.com/research/publications/learning-visual-representations-caption-annotations/
Arxiv entry: https://arxiv.org/abs/2008.01392
[1] Lip Reading Sentences in the Wild. Joon Son Chung, Andrew Senior, Oriol Vinyals and Andrew Zisserman. Computer Vision and Pattern Recognition, arXiv: 1611.05358 [cs.CV], 2017.
[2] The Caltech-UCSD Birds-200-2011 Dataset. Catherine Wah, Steve Branson, Peter Welinder, Pietro Perona and Serge Belongie. California Institute of Technology, Pasadena, CA, USA, Computation and Neural Systems Technical Report, CNS-TR-2011-001, 2011.
[3] ImageNet: A Large-Scale Hierarchical Image Database. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li and Li Fei-Fei. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2009), 20–25 June 2009, Miami, FL, USA. DOI: 10.1109/CVPR.2009.5206848.
[4] Unsupervised Representation Learning by Predicting Image Rotations. Spyros Gidaris, Praveer Singh and Nikos Komodakis. Computer Vision and Pattern Recognition, arXiv: 1803.07728v1 [cs.CV], 2018.
[5] Momentum Contrast for Unsupervised Visual Representation Learning. Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie and Ross Girshick. Computer Vision and Pattern Recognition, arXiv: 1911.05722 [cs.CV], 2020.
[6] Unsupervised Pre-training of Image Features on Non-curated Data. Mathilde Caron, Piotr Bojanowski, Julien Mairal and Armand Joulin. Computer Vision and Pattern Recognition, arXiv: 1905.01278v3 [cs.CV], 2019.
[7] Learning Visual Representations with Caption Annotations. Mert Bulent Sariyildiz, Julien Perez and Diane Larlus. European Conference on Computer Vision (ECCV 2020), 23–28 August 2020, Glasgow, UK (virtual event).
[8] BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee and Kristina Toutanova. Computation and Language, arXiv:1810.04805v2, 2019.
[9] Attention Is All You Need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser and Illia Polosukhin. Advances in Neural Information Processing Systems 30 (NIPS 2017), 4–9 December 2017, Long Beach, CA, USA.