A novel, plug-and-play model for 3D human shape estimation of the body or hands in videos, trained by mimicking the BERT algorithm from the natural language processing community.
Code: github/naver/posebert
PoseBERT [1] is a new algorithm that takes as input the 3D poses of a person estimated in each frame of a video, i.e. the positions of their body joints, and predicts a sequence of 3D shapes. Even when the per-frame estimates are noisy due to motion blur, occlusions or ambiguities, PoseBERT returns a smooth sequence of 3D shapes. PoseBERT can be plugged on top of any state-of-the-art pose estimation method, such as SPIN [2], our DOPE model [3], or our new MoCap-SPIN model also presented in [1].
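To give an idea of what such a plug-and-play refinement module looks like, here is a minimal sketch in PyTorch. The class name, dimensions and architecture details are illustrative assumptions, not the actual PoseBERT implementation (see the repository above for that): a transformer encoder consumes the sequence of per-frame pose estimates and outputs a refined sequence.

```python
import torch
import torch.nn as nn

class PoseBERTSmoother(nn.Module):
    # Hypothetical stand-in for PoseBERT: a transformer encoder that maps a
    # sequence of noisy per-frame pose vectors to a refined, smooth sequence.
    def __init__(self, pose_dim=72, d_model=256, n_layers=4, n_heads=8):
        super().__init__()
        self.embed = nn.Linear(pose_dim, d_model)             # pose -> token
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, pose_dim)              # token -> refined pose

    def forward(self, poses):                                 # poses: (B, T, pose_dim)
        return self.head(self.encoder(self.embed(poses)))

# Plug it on top of any per-frame estimator: run the estimator frame by frame,
# stack its outputs over time, then refine the whole sequence at once.
per_frame_poses = torch.randn(1, 16, 72)   # stand-in for SPIN/DOPE outputs
smoother = PoseBERTSmoother()
smooth_poses = smoother(per_frame_poses)   # (1, 16, 72) smoothed sequence
```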
PoseBERT is inspired by the BERT algorithm from the natural language processing (NLP) community. BERT, which stands for Bidirectional Encoder Representations from Transformers, is a method proposed by researchers at Google AI Language in 2018 that has achieved strong results on a wide variety of NLP tasks such as question answering and natural language inference. In their paper [4], the researchers describe, among other elements, a training technique named masked language modelling, which enables bidirectional training of their models. Before sentences are fed into BERT, a percentage of the words in each sequence are masked, and the model is trained to predict these masked words based on the context provided by the other, non-masked words of the sequence. PoseBERT adapts this learning process to human 3D poses: we mask, or perturb with noise, a percentage of poses in a sequence, and PoseBERT attempts to predict the missing or noisy poses using the context provided by the valid, untouched poses.
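Below is a hedged sketch of what this BERT-style corruption could look like for pose sequences. The corruption ratios, the zero-masking convention and the function name are illustrative assumptions rather than the exact training recipe of the paper.

```python
import torch

def corrupt_pose_sequence(poses, mask_ratio=0.15, noise_ratio=0.15, noise_std=0.1):
    # BERT-style corruption for pose sequences: hide some frames entirely
    # ("masking") and perturb others with Gaussian noise. The model is then
    # trained to reconstruct the original, clean sequence from this input.
    B, T, _ = poses.shape
    corrupted = poses.clone()
    r = torch.rand(B, T)
    masked = r < mask_ratio                                   # frames to hide
    noisy = (r >= mask_ratio) & (r < mask_ratio + noise_ratio)
    corrupted[masked] = 0.0                                   # zero out masked frames
    corrupted[noisy] += noise_std * torch.randn_like(corrupted[noisy])
    return corrupted, masked | noisy

# Training step sketch: feed the corrupted sequence to the model and supervise
# its output against the clean MoCap poses.
clean = torch.randn(8, 16, 72)             # batch of clean MoCap pose sequences
inputs, corrupted_frames = corrupt_pose_sequence(clean)
# loss = reconstruction_loss(model(inputs), clean)
```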
We trained two versions of PoseBERT: one for the body and one for the hand. In practice, we rely on the SMPL parametric model [5] developed by researchers at the Max Planck Institute in Germany, and train PoseBERT to predict the parameters of this model rather than the thousands of vertices of the human 3D mesh. PoseBERT can be trained on motion capture data only, without requiring any image annotations.
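The sketch below shows why regressing SMPL parameters is a much smaller prediction problem than regressing raw vertices. The dimensions are the standard SMPL ones; the commented-out calls show one common way to recover the mesh, using the third-party smplx package, and assume its model files have been downloaded separately.

```python
import torch

# SMPL factors a human mesh into a compact set of parameters:
#   pose:  24 joints x 3 axis-angle values = 72 numbers per frame
#   shape: 10 coefficients (betas), roughly constant over a video
# versus 6890 vertices x 3 coordinates = 20670 numbers for the raw mesh.
pose_params = torch.zeros(1, 72)   # the per-frame quantity PoseBERT regresses
shape_betas = torch.zeros(1, 10)

# The mesh can then be recovered with e.g. the smplx package
# (github.com/vchoutas/smplx); the SMPL model files must be obtained separately.
# import smplx
# smpl = smplx.create('models/', model_type='smpl')
# output = smpl(betas=shape_betas,
#               global_orient=pose_params[:, :3],   # root joint orientation
#               body_pose=pose_params[:, 3:])       # remaining 23 joints
# vertices = output.vertices                        # (1, 6890, 3)
```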
When combined with MoCap-SPIN, PoseBERT reaches state-of-the-art performance on several 3D human pose estimation benchmarks for videos. We also combined PoseBERT with DOPE [3] to estimate the 3D shape of a hand in real time and used these predictions to animate an ALLEGRO robot hand, a fun demo presented live at the 3DV 2021 conference demo session. We'll be pursuing this work on pose retargeting and on robots that manipulate objects like humans do, so stay tuned to our blog and publications.