I am a Senior Research Scientist at NAVER LABS Europe in the Computer Vision group. My research is focused on computer vision and machine learning, and in particular on human understanding tasks (human pose estimation, action recognition, etc.) and representation learning (self-supervised learning, image classification, image retrieval).
Please see my Google Scholar profile.
I graduated from ENS Cachan and obtained an MSc degree from Université Joseph Fourier (France) in 2012. I then worked as a doctoral candidate in the LEAR/Thoth teams at Inria Grenoble, under the supervision of Zaid Harchaoui and Cordelia Schmid, and received my PhD from Université Grenoble Alpes in 2016. That same year I joined the Xerox Research Centre Europe, which was acquired by Naver Labs in 2017.
- September 2022. One paper accepted at NeurIPS'22 on self-supervised learning for geometric tasks by cross-view completion.
- July 2022. Two papers accepted at ECCV'22: one on linking human poses and natural language (PoseScript) and one on human motion generation and forecasting (PoseGPT).
- July 2022. One paper accepted at IROS'22 on multi-finger grasping like humans.
- May 2022. One paper published at IJCV on benchmarking image retrieval for visual localization.
- March 2022. One paper accepted at CVPR'22 on unsupervised learning of local descriptors: PUMP.
- January 2022. One paper accepted at ICLR'22 on learning Super-features for image retrieval; the code is also released here.
- November 2021. One paper accepted at AAAI'22 on barely-supervised learning.
- October 2021. One paper accepted at 3DV'21 on leveraging MoCap data for human mesh recovery. We will also present a demo to animate a multi-finger gripper; see this blog post for more info.
- June 2021. Two papers this month: one at ICRA'21 on generating multi-finger grasps and another at CVPR'21 introducing novel large-scale indoor localization datasets.
- December 2020. Two papers at NeurIPS 2020. The first one, SuperLoss, is about an easy-to-use module to perform curriculum learning for any task which leads to robustness to noise in the training data. The second one, MoCHi, is about mixing negatives in the feature space for self-supervised contrastive learning.
- November 2020. One paper presented as an oral at 3DV 2020 on building a benchmark for 3D pose estimation in the wild by leveraging videos of static humans.
- August 2020. Two papers at ECCV 2020. The first one is entitled DOPE and is about whole-body 3D human pose estimation in the wild (whole-body = body + hand + face). Test code is available online. The second one summarizes the main results on the hand pose estimation challenge that took place at ICCV 2019.
- December 2019. New paper on arXiv benchmarking Kinetics-trained action recognition methods on mimed actions. The Mimetics dataset has been released. Now published at IJCV.
- December 2019. R2D2 training and testing code is now online.
- September 2019. Our R2D2 paper has been accepted as an oral at NeurIPS 2019! Preliminary version of the paper available on arXiv.
- June 2019. We won the local feature track of the long-term visual localization challenge; our method, named R2D2, will be presented at the CVPR'19 workshop.
- June 2019. Two papers to be presented at CVPR 2019. The first one is on visual localization from objects-of-interest (see the blog here and the VirtualGallery dataset there). The second one is on action recognition (see the blog here; the code is available there).
- January 2019. LCR-Net++ has now appeared in PAMI.
- September 2018. One demo accepted at ECCV 2018: a real-time version of LCR-Net++ for multi-person 2D/3D pose estimation in the wild from a single RGB image, now with a ResNet backbone and PyTorch code (available online) leading to better performance.
- June 2018. One demo accepted at CVPR 2018: a real-time version of LCR-Net.
- March 2018. An improved version of LCR-Net, called LCR-Net++, is available on arXiv.
- March 2018. One paper accepted at CVPR 2018 on action recognition from pose motion.
- October 2017. Training and test code for the ACT-Detector ICCV'17 paper on action detection is now available.
- July 2017. Two papers accepted at ICCV 2017.
- July 2017. Test code for our LCR-Net CVPR'17 paper on joint 2D-3D human pose detection is now available.
- March 2017. One paper accepted at CVPR 2017.