Q: What were your overall impressions of this year’s ICCV conference?
What struck me most was the number of people. This year’s conference was the biggest ICCV ever, with 3,100 attendees, roughly double the number at the last edition in 2015. It’s grown so much that the PAMI Technical Committee launched an initiative to help manage the numbers. They proposed two motions: make the conference annual instead of biennial, or make it two-track, which means you have to choose which orals to attend. The community voted for the latter, which I’m personally happy about because it’s a solution that has proven to work well at other conferences, such as CVPR, which has run double tracks since 1991.
Apart from the size, my overall impression was that it was a very good conference, with great papers and good opportunities for networking and scientific discussion.
Q: What areas attracted the most attention this year?
A lot of good progress is still being made on standard recognition tasks. A good example is the COCO challenge, where detection, segmentation and keypoint localization have all improved by about 10% since last year.
The Mask R-CNN method, which was awarded best paper at the main conference, has a lot to do with this progress, even though it only appeared on arXiv a few months ago. This shows the crazy pace of our field.
3D geometry has been getting more attention in recent years, and a lot of what is happening there leverages recent advances in deep learning, with “geometry meets deep learning” types of approaches.
Q: Were there any surprises?
Everyone was hiring. Absolutely everyone was hiring, and not just for one or two positions but for lots of positions (us included!). The sheer scale of it, and the resources companies are putting into hiring, is really quite something. Good Ph.D. students can pretty much take their pick from a bunch of earnest employers.
Q: What did you enjoy most at ICCV2017?
The scientific discussions at the posters. They’re by far the best way to hold in-depth technical conversations with the people who authored work you like and, the other way round, for people to come to you to learn more about what you’re doing. It’s unfortunate not to be able to do more of this but, because of the number of people there, you often have to queue to chat, so it’s only possible to meet a few people at each session if you want to spend some real time with them.
Q: What was the overall feeling about NAVER LABS Europe there?
As we’re new to NAVER LABS but not new to the field, we spent a lot of time sharing what we’re doing in our new home. NAVER is a household name in Korea but not so well known elsewhere. Its recent investments in Europe, as well as the acquisition of our organisation (XRCE), will change that, but we also have a lot to do as scientists in our communities, so you’ll be seeing us more and more at booths and social events, as well as, of course, publishing at the main conferences. ICCV was a great place to start spreading the word.
Q: Do you have any recommendations you could give to young researchers aiming to publish in ICCV for the first time?
It’s such a competitive field that, first of all, young researchers should most definitely not be discouraged if their paper is rejected. Secondly, when you do have something to publish, make sure you factor in sufficient time for the writing. Even if you’re proposing an outstanding method, your paper will only be as good as what the community gets out of it. Put yourself in your readers’ shoes and your paper will be all the better for it.
About the author: Diane is a senior scientist in the Computer Vision group at NAVER LABS Europe. As well as her oral paper, Diane gave an invited talk at the VSM workshop. For details of all NAVER LABS papers and talks at ICCV2017, see this blog post.
Our current job openings in computer vision are here.
To make robots autonomous in real-world everyday spaces, they should be able to learn, from their interactions within these spaces, how best to execute tasks specified by non-expert users in a safe and reliable way. Doing so requires sequential decision-making skills that combine machine learning, adaptive planning and control in uncertain environments, as well as solving hard combinatorial optimization problems. Our research combines expertise in reinforcement learning, computer vision, robotic control, sim2real transfer, large multimodal foundation models and neural combinatorial optimization to build AI-based architectures and algorithms that improve robot autonomy and robustness when completing everyday complex tasks in constantly changing environments. More details on our research can be found in the Explore section below.
For a robot to be useful it must be able to represent its knowledge of the world, share what it learns and interact with other agents, in particular humans. Our research combines expertise in human-robot interaction, natural language processing, speech, information retrieval, data management and low code/no code programming to build AI components that will help next-generation robots perform complex real-world tasks. These components will help robots interact safely with humans and their physical environment, other robots and systems, represent and update their world knowledge and share it with the rest of the fleet. More details on our research can be found in the Explore section below.
Visual perception is a necessary part of any intelligent system that is meant to interact with the world. Robots need to perceive the structure, the objects, and people in their environment to better understand the world and perform the tasks they are assigned. Our research combines expertise in visual representation learning, self-supervised learning and human behaviour understanding to build AI components that help robots understand and navigate in their 3D environment, detect and interact with surrounding objects and people and continuously adapt themselves when deployed in new environments. More details on our research can be found in the Explore section below.
NAVER France’s 2024 gender equality index score (based on 2023 data) is 87/100. Details of the indicators:
1. Pay gap between women and men: 34/40 points
2. Gap in individual salary increases between women and men: 35/35 points
3. Salary increases upon return from maternity leave: not calculable
4. Number of employees of the under-represented gender among the 10 highest salaries: 5/10 points
The NAVER France targets set in 2022 (Indicator n°1: +2 points in 2024 and Indicator n°4: +5 points in 2025) have been achieved.
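For readers wondering how the four indicators above yield 87/100, here is a minimal sketch in Python. It assumes the standard rule of the French professional equality index, under which a non-calculable indicator is excluded and the points obtained on the remaining indicators are rescaled to a total of 100; the indicator weights below are those of the official index.

```python
# Sketch: reproduce the 87/100 index score from the four indicators,
# assuming the standard rescaling rule for non-calculable indicators.
indicators = {
    "1. pay gap":                      (34, 40),
    "2. individual raises gap":        (35, 35),
    "3. maternity-leave raises":       (None, 15),  # not calculable this year
    "4. under-represented in top 10":  (5, 10),
}

# Sum points obtained and maximum points over calculable indicators only.
obtained = sum(pts for pts, _ in indicators.values() if pts is not None)
maximum = sum(mx for pts, mx in indicators.values() if pts is not None)

score = round(obtained / maximum * 100)  # 74 / 85 * 100 -> 87
print(f"Index: {score}/100")
```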
The research we conduct on expressive visual representations is applicable to visual search, object detection, image classification and the automatic extraction of 3D human poses and shapes that can be used for human behavior understanding and prediction, human-robot interaction or even avatar animation. We also extract 3D information from images that can be used for intelligent robot navigation, augmented reality and the 3D reconstruction of objects, buildings or even entire cities.
Our work covers the spectrum from unsupervised to supervised approaches, and from very deep architectures to very compact ones. We’re excited about the promise of big data to bring big performance gains to our algorithms but also passionate about the challenge of working in data-scarce and low-power scenarios.
Furthermore, we believe that a modern computer vision system needs to continuously adapt itself to its environment and improve itself via lifelong learning. Our driving goal is to use our research to deliver embodied intelligence to our users in robotics and autonomous driving, via phone cameras and any other visual means, to reach people wherever they may be.