Philippe Weinzaepfel | 2020
Estimating human poses from photographs and videos represents a major challenge in computer vision. The ability to recognize detailed 3D poses—defined by the location of a set of predefined ‘key points’, such as body joints—could enable avatar animation to be simplified, for example, or make human interaction with virtual content feel more natural. Pose recognition is also useful for teaching robots to perform tasks via human demonstration, as it makes their implementation far more intuitive and adaptive (1).
So far, pose recognition has been tackled by looking at specific body parts, such as the limbs (position of the arms and legs) (2, 3), hands (position of the fingers) (4) or face (position of eyes, nose, eyebrows, mouth and face contour) (5). To achieve a global understanding of human poses, though, one must capture information about all body parts at once. Animating an avatar in a realistic fashion, for example, requires the capture not only of limb position but also of facial expression and hand gesture, as finger poses and facial movements carry a great deal of non-verbal communication. Moreover, teaching a robot arm how to perform a task requires more than the positions of the arm and hand; nuanced information about finger movement would be needed to manipulate an object, for example.
We're proposing the first learning-based framework that can detect people in an image and estimate their whole-body pose (including body, face and hands) in both 2D and 3D, as shown in Figure 1 (6). Importantly, our aim is to tackle this problem in the wild, which means that our method must be robust to occlusion (where the person being tracked is partially obscured by another object or person) and truncation at the image boundary (where the person is cut off at the edge of the image). Our framework must also accurately estimate poses in images that depict multiple people interacting with each other or with objects in the scene.
Body-part experts for in-the-wild pose recognition
The main difficulty in estimating poses arises due to the lack of in-the-wild training data with whole-body 3D annotations. The datasets currently available are generally captured in controlled environments, such as ‘mocap’ rooms. A model trained on this kind of data—with a single fully visible person and a fixed background—would not be generalizable to real-world images, due to the more complex interplay between people and objects. Although in-the-wild datasets are available for the sub-problems of body-, hand- and face-pose estimation, annotations are only provided for the key points of interest in each set. This means that while one dataset could include, for example, hand-pose annotations, information about facial features and the pose of other parts of the body would be missing.
We propose leveraging these datasets to train a single network for whole-body pose estimation using distillation. More precisely, we use distillation to transfer the knowledge of several body-part experts into a unified network that outputs a more complete representation of the whole human body. An overview of our training framework is shown in Figure 2. Given a training image, a whole-body network requires ground-truth annotations—i.e. information about the exact 2D and 3D locations of every joint for each part—for bodies, hands and faces observed in the image. As we don’t have ground-truth data, we propose instead using an expert (a network specialized for 2D/3D pose estimation of a given body part) for each. In the example shown in Figure 2, we run a body expert, a hand expert and a face expert to obtain detections and 2D/3D poses for each. We then combine the estimations of the three experts to obtain detections for whole bodies that can then be used as pseudo-ground-truth annotations to train our whole-body network. Note that because we assume that these experts are already trained on dedicated datasets, they are frozen (i.e. do not change) during training of the whole-body network.
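To make the pseudo-annotation step concrete, here is a minimal sketch (not the authors' code) of how frozen part experts could be run on a training image to produce whole-body pseudo-ground-truth. The expert interfaces and the PartDetections structure are assumptions for illustration.

```python
# A minimal sketch of pseudo-ground-truth generation with frozen part experts.
# The expert objects and the PartDetections structure are hypothetical.
from dataclasses import dataclass
from typing import List

import torch


@dataclass
class PartDetections:
    boxes: torch.Tensor      # (N, 4) candidate boxes for this part
    poses_2d: torch.Tensor   # (N, J, 2) 2D keypoints per detection
    poses_3d: torch.Tensor   # (N, J, 3) 3D keypoints per detection


@torch.no_grad()  # the experts are frozen, so no gradients are needed
def make_pseudo_ground_truth(image: torch.Tensor,
                             body_expert, hand_expert, face_expert) -> List[PartDetections]:
    """Run each frozen expert on the image and collect its detections and
    2D/3D poses, to be used as pseudo-annotations for the whole-body network."""
    parts = []
    for expert in (body_expert, hand_expert, face_expert):
        expert.eval()                # keep normalization/dropout layers frozen
        parts.append(expert(image))  # each expert is assumed to return PartDetections
    return parts
```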
We also employ a distillation loss, which ensures the network makes predictions that are as close as possible to those made by the experts. Since we are distilling the knowledge of each part expert into a single network for whole-body pose estimation, we call our method ‘distillation of part experts’, or DOPE.
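As an illustration, the sketch below shows one possible form of such a distillation term, assuming the student is trained to match each expert's classification scores (soft targets) and regressed 2D/3D keypoints. The exact formulation and weighting used in DOPE may differ.

```python
# A minimal sketch of a distillation term between the whole-body student and a part expert.
import torch
import torch.nn.functional as F


def distillation_loss(student_cls_logits: torch.Tensor,  # (N, K+1) scores over anchor poses + background
                      expert_cls_probs: torch.Tensor,     # (N, K+1) soft targets from the expert
                      student_keypoints: torch.Tensor,    # (N, K, J, 5) 2D+3D regression per anchor pose
                      expert_keypoints: torch.Tensor,     # (N, K, J, 5) expert regression outputs
                      reg_weight: float = 1.0) -> torch.Tensor:
    # Cross-entropy of the student's scores against the expert's soft class distribution.
    cls_term = -(expert_cls_probs * F.log_softmax(student_cls_logits, dim=-1)).sum(-1).mean()
    # Smooth L1 between the keypoints regressed by the student and by the expert.
    reg_term = F.smooth_l1_loss(student_keypoints, expert_keypoints)
    return cls_term + reg_weight * reg_term
```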
DOPE uses a detection architecture in which bodies, hands and faces are the objects detected in each image. To achieve this, we extend LCR-Net++ (2), a recently developed body-pose detection architecture that's robust in a variety of challenging real-world scenarios (an overview is shown in Figure 3). As well as estimating body poses, LCR-Net++ has been adapted (4) to address the challenge of hand-pose estimation: here, the anchor poses represent a set of particular hand poses and the regression is applied to the hand key points. We additionally adapted LCR-Net++ to tackle face-pose detection, with facial landmarks as key points. The original LCR-Net++ architecture and the adapted versions for hands and faces make up the three part experts required by our framework.
Given an image, LCR-Net++ extracts convolutional features and feeds them into a region proposal network (RPN) to generate candidate boxes that may contain body instances. The image features are then pooled over these boxes and passed through additional convolutions before splitting into two branches: the classification branch selects the most similar pose from a discrete set of predefined anchor poses, and the regression branch applies an anchor-pose-specific refinement of the predicted pose in both 2D and 3D.
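The sketch below illustrates this two-branch head in PyTorch. It is not the original LCR-Net++ implementation, and the feature dimension, number of anchor poses and number of joints are illustrative assumptions.

```python
# A minimal sketch of an LCR-Net++-style two-branch head over pooled RoI features.
import torch
import torch.nn as nn


class AnchorPoseHead(nn.Module):
    def __init__(self, feat_dim: int = 1024, num_anchors: int = 20, num_joints: int = 13):
        super().__init__()
        # Classification over K anchor poses plus one background class.
        self.cls = nn.Linear(feat_dim, num_anchors + 1)
        # Anchor-specific regression: 2D (x, y) + 3D (x, y, z) offsets per joint.
        self.reg = nn.Linear(feat_dim, num_anchors * num_joints * 5)
        self.num_anchors, self.num_joints = num_anchors, num_joints

    def forward(self, roi_feats: torch.Tensor):
        scores = self.cls(roi_feats)                         # (N, K+1)
        deltas = self.reg(roi_feats).view(-1, self.num_anchors,
                                          self.num_joints, 5)  # (N, K, J, 5)
        return scores, deltas
```

At test time, the top-scoring anchor pose for each candidate box would be selected and refined by its corresponding regression offsets to obtain the final 2D and 3D pose.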
For our DOPE network architecture (see Figure 4), we combined LCR-Net++ for bodies, hands and faces. The convolutional features pooled from the candidate boxes are fed into six branches in total: one classification and one regression branch per part. DOPE combines the RPN loss with the classification and regression losses of each part. Additionally, we use a distillation loss to further enforce imitation of the part experts by penalizing differences between the outputs of our DOPE network and those of the teacher networks.
Table 1 compares the performance of DOPE with that of the three experts (i.e. for body, hand and face) on a variety of datasets. We used the MPII and MuPoTs datasets to evaluate body-pose estimation in 2D and 3D, respectively, the RenderedHand dataset to evaluate hand-pose estimation in 3D, and the Menpo dataset to evaluate face landmark detection. Our results show that DOPE performs on par with the experts while carrying out all three tasks with a network capacity comparable to that of a single expert. Compared with DOPE, a baseline in which unannotated parts are simply ignored during training shows a drop in performance, especially for hands and faces: in our detection framework, ignored parts are treated as background during training.
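Continuing the illustrative sketch above, the snippet below shows one way the six branches and the training losses could be combined (reusing the hypothetical AnchorPoseHead from the previous sketch). The actual DOPE head, joint counts and loss weights may differ.

```python
# A minimal sketch of a DOPE-style head with six branches (classification +
# regression for body, hand and face) and a combined training loss.
# Joint counts, anchor counts and weights are illustrative assumptions.
import torch
import torch.nn as nn


class DopeHead(nn.Module):
    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        # One anchor-pose head per part, each with its own classification
        # and regression branch (six branches in total).
        self.body = AnchorPoseHead(feat_dim, num_anchors=20, num_joints=13)
        self.hand = AnchorPoseHead(feat_dim, num_anchors=10, num_joints=21)
        self.face = AnchorPoseHead(feat_dim, num_anchors=10, num_joints=68)

    def forward(self, roi_feats: torch.Tensor):
        return {part: head(roi_feats)
                for part, head in (("body", self.body),
                                   ("hand", self.hand),
                                   ("face", self.face))}


def total_loss(rpn_loss, part_losses, distill_losses, distill_weight: float = 1.0):
    """Sum of the RPN loss, the per-part classification/regression losses
    and the distillation terms."""
    return rpn_loss + sum(part_losses) + distill_weight * sum(distill_losses)
```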
Video 1: A live demo of DOPE
Footage from a live demo of our approach (Video 1) shows that DOPE runs in real time on relatively inexpensive hardware (a laptop with a GTX 1080 graphics card). For clarity, we show only the 2D poses, but 3D poses are also estimated. Our model detects multiple people in the scene and is robust to challenging conditions such as truncation and occlusion.
DOPE is the first learning-based method to detect and estimate whole-body 3D human poses in the wild, and includes 2D and 3D key points for the body, hands and face. We successfully overcame the lack of training data that is available for this task by leveraging distillation from part experts to our whole-body network. We will present DOPE at the (virtual) European Conference on Computer Vision at the end of August. Code and models are available here.
In our future work, we intend to investigate whether using diverse unlabelled data can further improve the generalizability of our model. We also hope to improve the performance of the experts under challenging conditions—such as interactions with objects—for wider applicability. Finally, we plan to develop applications around DOPE (e.g. interaction with virtual avatars and robots).
[1] Visual Recognition of Grasps for Human-to-Robot Mapping. Hedvig Kjellström, Javier Romero and Danica Kragic. International Conference on Intelligent Robots and Systems (IROS 2008), Nice, France, 22–26 September 2008. DOI: 10.1109/IROS.2008.4650917.
[2] LCR-Net++: Multi-person 2D and 3D Pose Detection in Natural Images. Gregory Rogez, Philippe Weinzaepfel and Cordelia Schmid. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 5, 2019, pp. 1145–1161.
[3] VNect: Real-Time 3D Human Pose Estimation with a Single RGB Camera. Dushyant Mehta, Srinath Sridhar, Oleksandr Sotnychenko, Helge Rhodin, Mohammad Shafiei, Hans-Peter Seidel, Weipeng Xu, Dan Casas and Christian Theobalt. ACM Transactions on Graphics, vol. 36, no. 4, 2017, pp. 1–14.
[4] Measuring Generalisation to Unseen Viewpoints, Articulations, Shapes and Objects for 3D Hand Pose Estimation under Hand-Object Interaction. Anil Armagan, Guillermo Garcia-Hernando, Seungryul Baek, Shreyas Hampali, Mahdi Rad, Zhaohui Zhang, Shipeng Xie et al. 16th European Conference on Computer Vision (ECCV 2020), 23–28 August 2020.
[5] The Menpo Benchmark for Multi-pose 2D and 3D Facial Landmark Localisation and Tracking. Jiankang Deng, Anastasios Roussos, Grigorios Chrysos, Evangelos Ververas, Irene Kotsia, Jie Shen and Stefanos Zafeiriou. International Journal of Computer Vision, vol. 127, 2019, pp. 599–624. DOI: 10.1007/s11263-018-1134-y.
[6] DOPE: Distillation of Part Experts for Whole-Body 3D Pose Estimation in the Wild. Philippe Weinzaepfel, Romain Brégier, Hadrien Combaluzier, Vincent Leroy and Gregory Rogez. 16th European Conference on Computer Vision (ECCV 2020), 23–28 August 2020.