Philippe Weinzaepfel | Naver Labs Europe

Computer Vision

I am a Research Scientist at NAVER LABS Europe in the Computer Vision group. My research is focused on computer vision and machine learning, and in particular on human understanding tasks (human pose estimation, action recognition, etc.).

 

PUBLICATIONS

Please check my Google Scholar profile.

I graduated from ENS Cachan and obtained an MSc degree from Université Joseph Fourier (France) in 2012. I then worked as a doctoral candidate in the LEAR/Thoth teams at Inria Grenoble and received my PhD from Université Grenoble Alpes in 2016, under the supervision of Zaid Harchaoui and Cordelia Schmid. I joined the Xerox Research Centre Europe in 2016, which was acquired by Naver Labs in 2017.

 

  • December 2020. Two papers at NeurIPS 2020. The first, SuperLoss, is an easy-to-use module that performs curriculum learning for any task and makes training robust to noise in the data. The second, MoCHi, mixes negatives in the feature space for self-supervised contrastive learning.
  • November 2020. One paper presented as an oral at 3DV 2020 on building a benchmark for 3D pose estimation in the wild by leveraging videos of static humans.
  • August 2020. Two papers at ECCV 2020. The first, DOPE, tackles whole-body 3D human pose estimation in the wild (whole-body = body + hands + face); test code is available online. The second summarizes the main results of the hand pose estimation challenge held at ICCV 2019.
  • December 2019. New paper on arXiv benchmarking action recognition methods trained on Kinetics on mimed actions. Mimetics dataset released.
  • December 2019. R2D2 training and testing code is now online.
  • September 2019. Our R2D2 paper has been accepted as an oral at NeurIPS 2019! A preliminary version of the paper is available on arXiv.
  • June 2019. We won the local feature track of the long-term visual localization challenge: our method, R2D2, will be presented during the CVPR'19 workshop.
  • June 2019. Two papers to be presented at CVPR 2019: the first on visual localization from objects-of-interest (blog post and VirtualGallery dataset available), the second on action recognition (blog post and code available).
  • January 2019. LCR-Net++ has appeared in PAMI.
  • September 2018. One demo accepted at ECCV 2018 on a real-time version of LCR-Net++ for multi-person 2D/3D pose estimation in the wild from a single RGB image, now with a ResNet backbone and PyTorch code (available online), leading to better performance.
  • June 2018. One demo accepted at CVPR 2018 on a real-time version of LCR-Net.
  • March 2018. A newer and better version of LCR-Net, called LCR-Net++, is available on arXiv.
  • March 2018. One paper accepted at CVPR 2018 on action recognition from pose motion.
  • October 2017. Training and test code for the ACT-detector ICCV'17 paper on action detection is now available.
  • July 2017. Two papers accepted at ICCV 2017.
  • July 2017. Test code for our LCR-Net CVPR'17 paper on joint 2D-3D human pose detection is now available.
  • March 2017. One paper accepted at CVPR 2017.

 

+33 (0)4 76 61 41 69
