NAVER is Korea’s premier internet company and a global leader in online services like NAVER search, LINE messaging and WEBTOON. NAVER invests over a quarter of revenue in R&D and, through advanced technology, is seamlessly connecting the physical and digital worlds. Its AI and Robotics research in Asia and Europe is fundamental to creating this future.
CLOVA AI Research is the team responsible for the advanced and fundamental AI technologies, based on machine learning and deep learning, that make the global NAVER and LINE AI platform (CLOVA) smarter. Computer vision, natural language processing, recommendation and pattern recognition technologies are upgraded by establishing new models and investigating optimization methods suited to existing models. Results are disseminated on the NAVER GitHub site and in publications at conferences such as CVPR, ECCV, ICCV, NeurIPS, ICML, Interspeech and EMNLP.
NAVER LABS is an R&D subsidiary of NAVER responsible for future technology. Its world class researchers in Korea and Europe create new connections between people, machines, spaces and information by advancing technology in AI, robotics, autonomous driving, 3D/HD mapping and AR.
Congratulations to Florent Perronnin (NAVER LABS Europe), Jorge Sánchez (National University of Córdoba) and Thomas Mensink (University of Amsterdam) for winning the Koenderink 10-year test-of-time award at ECCV 2020 for their 2010 paper ‘Improving the Fisher kernel for large-scale image classification’! The award was announced at the opening ceremony of the conference on Monday 24th August.
NAVER LABS Europe came 2nd in the Long-Term Visual Localization Challenge in Autonomous Vehicles!
NAVER is presenting 9 papers at ECCV as well as giving tutorials and organising workshops. In this live session, you’ll get a brief overview of relevant R&D activities and career opportunities at NAVER, Korea’s no.1 Internet portal and leading global content services provider. Leaders from NAVER LABS research in Korea and Europe and the AI division ‘Clova AI’ will present current work in computer vision, machine learning, AR, AI and Robotics followed by a Q&A.
Naila Murray
Director of Science, NAVER LABS Europe. Princeton University and Universitat Autònoma de Barcelona & CVC Barcelona. Previously head of computer vision research at NAVER LABS Europe. Research interests include representation learning and multi-modal search. Area Chair at CVPR 2020, PC for ICLR 2021.
Jongyoon Peck
Executive Officer and Head of Autonomous Driving at NAVER LABS Korea, Member of the Korean Presidential Committee on 4th Ind. Revolution, Previously Samsung Techwin, Stanford University and Seoul National University.
Jung-Woo Ha
Executive Officer and Head of Research at Clova AI. Clova is a general-purpose AI assistant platform developed by NAVER and LINE. PhD in machine learning and AI from Seoul National University.
NAVER LABS Europe is supporting the Visual Domain Adaptation Challenge (VisDA-2020) at TASK-CV 2020 and the Robust Vision Challenge (RVC 2020), and is giving a tutorial on Domain Adaptation for Visual Applications.
24th August 06:00 and 14:00 #1014
Learning Visual Representations with Caption Annotations [supplementary material]
Bülent Sariyildiz, Julien Perez, Diane Larlus
25th August 14:00 and 20:00 #2074
ReAD: Reciprocal Attention Discriminator for Image-to-Video Re-Identification [supplementary material]
Minho Shim, Hsuan-I Ho, Jinhyung Kim, Dongyoon Wee.
25th August 14:00 and 20:00 #2241
Character Region Attention For Text Spotting
Youngmin Baek, Seung Shin, Jeonghun Baek, Sungrae Park, Junyeop Lee, Daehyun Nam, Hwalsuk Lee
25th August 14:00 and 26th August 00:00 #2360
Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation under Hand-Object Interaction [supplementary material]
Anil Armagan, Philippe Weinzaepfel, Romain Bregier, Gregory Rogez et al.
26th August 06:00 and 14:00 #3214
DOPE: Distillation Of Part Experts for whole-body 3D pose estimation in the wild [Blog]
Philippe Weinzaepfel, Romain Brégier, Hadrien Combaluzier, Vincent Leroy, Gregory Rogez
26th August 22:00 and 27th August 08:00 #3241
BSL-1K: Scaling up co-articulated sign recognition using mouthing cues [supplementary material]
Samuel Albanie, Gul Varol, Liliane Momeni, Triantafyllos Afouras, Joon Son Chung, Neil Fox, Andrew Zisserman
27th August 14:00 and 20:00 #4137
Few-shot Compositional Font Generation with Dual Memory [supplementary material]
Junbum Cha, Sanghyuk Chun, Gayoung Lee, Bado Lee, Seonghyeon Kim, Hwalsuk Lee
27th August 14:00 and 28th August 00:00 #4335
Learning to Generate Grounded Visual Captions without Localization Supervision [supplementary material]
Chih-Yao Ma, Yannis Kalantidis, Ghassan AlRegib, Peter Vajda, Marcus Rohrbach, Zsolt Kira
27th August 22:00 and 28th August 08:00 #4330
Self-supervised learning of audio-visual objects from video [supplementary material]
Triantafyllos Afouras, Andrew Owens, Joon Son Chung, Andrew Zisserman
Workshop: TASK-CV
Date: Sunday 23rd August 2020
Organisers: Tatiana Tommasi (Politecnico di Torino, Italy), Antonio M. Lopez (CVC & UAB, Spain), David Vazquez (Element AI, Canada), Gabriela Csurka (NAVER LABS Europe, France), Kate Saenko (Boston University, USA), Liang Zheng (Australian National University, Australia), Xingchao Peng (Boston University, USA), Weijian Deng (Australian National University, Australia)
NAVER LABS Europe is supporting the best paper award at TASK-CV 2020.
Workshop: Women in Computer Vision (WiCV)
Date: Sunday 23rd August, 10pm (UTC+1)
Diane Larlus, Principal Scientist, mentor at the mentoring dinner
Workshop: Long-Term Visual Localization
Date: Friday 28th August 2020
Challenge: NAVER LABS Europe took 2nd place in the Autonomous Vehicle Challenge with ‘Robust Image Retrieval-based Visual Localization using Kapture’ by Martin Humenberger, Yohann Cabon, Nicolas Guerin, Julien Morat, Jérôme Revaud, Philippe Rerole, Noé Pion, Cesar de Souza and Gabriela Csurka.
All datasets for this challenge were provided by NAVER LABS Europe in the kapture format, a data format and tool set that facilitates the integration of various datasets into visual localization processing environments.
More details can be found on the kapture website.
Code, specifications, and documentation are on GitHub.
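As an illustrative sketch of what working with kapture-style data can look like, the snippet below parses a trajectories file whose column layout (timestamp, device_id, qw, qx, qy, qz, tx, ty, tz) follows the published kapture specification; the helper name and sample data are made up here, so consult the official documentation and tools on GitHub before relying on this layout.

```python
# Hypothetical helper: parse a kapture-style trajectories CSV into a pose dict.
# Assumed column order: timestamp, device_id, qw, qx, qy, qz, tx, ty, tz.
import csv
import io

def parse_trajectories(text):
    """Return {(timestamp, device_id): (rotation_quaternion, translation)}."""
    poses = {}
    for row in csv.reader(io.StringIO(text)):
        if not row or row[0].lstrip().startswith('#'):
            continue  # skip blank lines and '#' comment/header lines
        ts, device = row[0].strip(), row[1].strip()
        qw, qx, qy, qz, tx, ty, tz = (float(v) for v in row[2:9])
        poses[(int(ts), device)] = ((qw, qx, qy, qz), (tx, ty, tz))
    return poses

# Made-up sample in the assumed layout (not real challenge data).
sample = """# kapture format: 1.0
# timestamp, device_id, qw, qx, qy, qz, tx, ty, tz
0, cam0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0
1, cam0, 0.707, 0.0, 0.707, 0.0, 1.5, 0.0, -2.0
"""
poses = parse_trajectories(sample)
print(len(poses))  # prints: 2
```

In practice the kapture toolkit itself provides importers and converters for the supported localization datasets; a hand-rolled parser like this is only useful for quick inspection.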
NAVER LABS Europe is a sponsor of the Long-Term Visual Localization Workshop.
Tutorial: Weakly supervised learning in computer vision
Date: Friday 28th August, 9am and 5pm (UTC+1)
Speakers: Hakan Bilen, University of Edinburgh, Rodrigo Benenson, Google and Seong Joon Oh, NAVER
Workshop: Instance-Level Recognition Workshop
Date: Friday 28th August, 10pm (UTC+1)
Invited talk: Diane Larlus, Principal Scientist, “From Instance-Level to Semantic Image Retrieval”
NAVER was recognised as a top employer and a company university students would like to work for in South Korea for 3 consecutive years (2016–2019).
Diversity is the reason NAVER came into existence in 1999. The need to provide alternatives is a fundamental core value for a healthy society.
We value different ways of thinking about the world and different perceptions of the world.
We try to create an inclusive workplace where respect reigns. A place where everyone can be themselves.
NAVER LABS Europe, 6-8 chemin de Maupertuis, 38240 Meylan, France.
To be autonomous in real-world everyday spaces, robots must learn, from their interactions within these spaces, how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing complex everyday tasks in constantly changing environments. More details on our research can be found in the Explore section below.
For a robot to be useful it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low code/no code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, other robots and systems, represent and update their world knowledge and share it with the rest of the fleet. More details on our research can be found in the Explore section below.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, the objects, and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate in their 3D environment, detect and interact with surrounding objects and people and continuously adapt themselves when deployed in new environments. More details on our research can be found in the Explore section below.
Details on the gender equality index score 2024 (based on 2023 data) for NAVER France: 87/100.
1. Gender pay gap (women/men): 34/40 points
2. Gap in individual salary increases (women/men): 35/35 points
3. Salary increases for all employees returning from maternity leave: not calculable
4. Employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
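For context, the 87/100 total is consistent with the indicator scores listed above under the usual rule of the French index, which (as assumed here, since the notice does not state it) excludes non-calculable indicators and rescales the remaining points to 100:

```latex
\frac{34 + 35 + 5}{40 + 35 + 10} \times 100 = \frac{74}{85} \times 100 \approx 87
```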
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to be able to continuously adapt itself to its environment and to improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics, autonomous driving, via phone cameras and any other visual means to reach people wherever they may be.