We love visual content. While the power of words is limited by language and cultural barriers, pictures and videos are a universal communication medium that transcends such differences.
04 October 2013
With the availability of cheap cameras and camera phones, it is not surprising to see visual content overflowing the Internet. As of today, Facebook is estimated to contain more than 100 billion images, while tens of hours of video are uploaded to YouTube every single minute. Yet this wealth of media and information has little value per se if it cannot be easily accessed.
The most natural approach to searching for content on the Internet is to issue text queries on search engines such as Google, Yahoo or Bing. This is because, by far, our preferred mode of interaction with a computer is through its keyboard. Such interaction makes perfect sense when looking for text. For instance, you will have no problem retrieving the full lyrics of a song from the Internet even if you know only a few words. But there is a huge gap between pictures and words – generally referred to as the semantic gap – which makes the task of matching words and pictures look almost impossible. This is why we tag our pictures and videos and why we give them descriptive names: to be able to retrieve them using text queries. However, hand-tagging visual content is a slow and tedious process, and only a fraction of the visual content on the Internet is tagged. The remainder – sometimes referred to as the deep web – is effectively inaccessible. We sometimes even have trouble finding pictures in our own personal photo collections.
What if we could give computers the “gift of sight”? What if we could teach computers to translate our textual queries into “visual queries” so that words can be matched to visual content? But computers see pictures only as a multitude of small colorful dots standing next to each other, i.e. as a matrix of pixels. Why would these colorful dots have any meaning for a machine? The challenge is to link low-level information (the pixels) with high-level concepts such as objects and scenes. This is the difficult question that computer vision research tries to address.
Computer vision is the research field devoted to designing computer programs that help machines understand visual data: for instance, programs that name the objects in a picture or a video clip. In other words, computer vision scientists teach computers to “see”. They “show” images of different objects to a computer, telling it what they are, so that it can be trained to recognize these objects – in the same manner one shows images to a small child, often by pointing at objects and naming them.
While the task of interpreting a scene and its objects is trivial for humans – even for small children – teaching computers to see has proven to be a very arduous task for computer vision researchers. To bridge the semantic gap between low-level pixels and high-level concepts, it is necessary to introduce intermediate representations. Consequently, the first programs in automatic image understanding tried to break down the problem of recognizing objects into the problem of recognizing small object pieces. Early methods proposed to decompose objects into geometric components such as cylinders, bricks, wedges or circles. For instance, an ice cream cone is composed of a sphere located above a cone. Although intuitively appealing, such methods achieved only moderate success, as the problem of recognizing object parts can be as difficult as recognizing the whole object.
As mentioned earlier, pictures are a universal communication medium that is more powerful but also more complex than text. Because a document is a series of words, and words are well-defined entities, we can count how often each word appears in a document and represent the document by the number of occurrences of each word. This simple representation – known as the bag-of-words – is very powerful and is at the heart of every modern text search engine. It allows us to classify documents using the presence (or absence) of certain words, as this is a strong indicator of the topic of a document. For instance, words such as “score”, “ball” or “league” are strongly indicative of a sports theme.
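To make this concrete, here is a minimal sketch of the bag-of-words representation in Python; the example sentence is invented, and real search engines add tokenization rules, stop-word removal and term weighting on top of plain counting.

```python
# Minimal bag-of-words sketch: represent a document by its word counts.
from collections import Counter

def bag_of_words(document: str) -> Counter:
    """Count how often each word appears in the document."""
    return Counter(document.lower().split())

doc = "The ball crossed the line and the final score decided the league title"
print(bag_of_words(doc))
# Frequent content words such as "ball", "score" or "league" hint at a sports theme.
```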
What if we could define entities made up of pixels, such as “visual words”, so that similar representations could be used to describe images? This is exactly what Xerox scientists managed to achieve and, in the process, they revolutionized the field of computer vision. Drawing an analogy between visual and textual content, they introduced the concept of a “visual vocabulary” as an intermediate representation and used it to recognize semantic concepts such as objects. The issue with an image is that there is no obvious way to split it into a set of words. So the researchers proposed the following. Images would first be split into small image patches. These patches would then be grouped, using learning algorithms, into visually consistent clusters, where each cluster can be understood as a visual word. Each piece of an image can then be mapped to one of these visual words. This visual vocabulary is very simple to learn and yet offers a higher degree of abstraction than the images themselves. Xerox researchers then “showed” computers bag-of-visual-words representations corresponding to different objects in order to train them to recognize those objects.
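The sketch below illustrates how such a visual vocabulary could be built and used. It assumes grayscale images stored as NumPy arrays, uses raw pixel patches as descriptors and k-means as the grouping algorithm, and picks arbitrary parameter values; these are illustrative simplifications rather than the exact pipeline of [1], which relies on keypoint detectors and local descriptors.

```python
# A minimal sketch of a visual vocabulary and bag-of-visual-words histograms.
# Assumptions (not from the original paper): grayscale images as 2-D NumPy arrays,
# raw pixel patches as descriptors, k-means as the grouping algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.image import extract_patches_2d

N_VISUAL_WORDS = 100   # vocabulary size (a free parameter)
PATCH_SIZE = (8, 8)    # small image patches play the role of word tokens

def patch_descriptors(image, max_patches=200, seed=0):
    """Cut an image into small patches and flatten each patch into a vector."""
    patches = extract_patches_2d(image, PATCH_SIZE,
                                 max_patches=max_patches, random_state=seed)
    return patches.reshape(len(patches), -1).astype(np.float64)

def learn_vocabulary(training_images):
    """Group patches from many images into visually consistent clusters:
    each cluster centre acts as one 'visual word'."""
    all_patches = np.vstack([patch_descriptors(img) for img in training_images])
    return KMeans(n_clusters=N_VISUAL_WORDS, n_init=10).fit(all_patches)

def bag_of_visual_words(image, vocabulary):
    """Map each patch to its nearest visual word and count word occurrences,
    exactly as one would count words in a text document."""
    words = vocabulary.predict(patch_descriptors(image))
    histogram = np.bincount(words, minlength=N_VISUAL_WORDS)
    return histogram / histogram.sum()
```

The resulting histograms play the same role as word counts do for text: once a set of labelled images has been converted into bag-of-visual-words histograms, any standard classifier (for instance a support vector machine) can be trained on them, which is what “showing” objects to a computer amounts to in practice.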
The paper that describes this idea (“Visual categorization with bags of keypoints” [1]) had a tremendous impact and created a paradigm shift in the computer vision research community. Researchers moved away from the small-scale problems studied in laboratory settings and started to address much larger-scale, realistic problems. The “bag-of-visual-words” model is now a de facto standard in the research community. Almost 10 years after publication, it remains one of the most cited articles in computer vision [2]. The vast majority of algorithms proposed since then build on the same seminal “visual vocabulary” idea.
This technology has been applied to many problems of high practical value. At Xerox, it has been used in application scenarios as varied as document routing in scanning workflows, vehicle recognition in surveillance videos, product recognition in retail, and image aesthetic analysis in communication and marketing.
To get a better feel for what Xerox visual search engines can do today, see the categorization demo on Open Xerox.
About the authors:
Diane Larlus is a research scientist in the computer vision group at Xerox Research Centre Europe. Her main interests are object recognition and localization, and more generally machine learning applied to computer vision.
Florent Perronnin is principal scientist and manager of the computer vision team at Xerox Research Centre Europe. His main interests lie in the application of machine learning to computer vision tasks such as image classification, retrieval or segmentation.
[1] Gabriela Csurka, Chris Dance, Lixin Fan, Jutta Willamowski and Cédric Bray, “Visual categorization with bags of keypoints”, Workshop on Statistical Learning in Computer Vision, European Conference on Computer Vision (ECCV), 2004.
[2] More than 1,900 citations as of today, according to Google Scholar.
To make robots autonomous in real-world everyday spaces, they should be able to learn, from their interactions within these spaces, how best to execute tasks specified by non-expert users in a safe and reliable way. To do so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments. More details on our research can be found in the Explore section below.
For a robot to be useful it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low code/no code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, other robots and systems, represent and update their world knowledge and share it with the rest of the fleet. More details on our research can be found in the Explore section below.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, the objects, and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate in their 3D environment, detect and interact with surrounding objects and people and continuously adapt themselves when deployed in new environments. More details on our research can be found in the Explore section below.
Details on the gender equality index score 2024 (related to year 2023) for NAVER France of 87/100.
1. Difference in female/male salary: 34/40 points
2. Difference in salary increases female/male: 35/35 points
3. Salary increases upon return from maternity leave: Not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, autonomous driving, via phone cameras and any other visual means to reach people wherever they may be.