Philippe Weinzaepfel | Naver Labs Europe


Computer Vision

I am a Research Scientist at NAVER LABS Europe in the Computer Vision group. My research is focused on computer vision and machine learning, and in particular on human understanding tasks (human pose estimation, action recognition, etc.).



Please check my Google Scholar profile.

I graduated from ENS Cachan and obtained an MSc degree from Université Joseph Fourier (France) in 2012. I then worked as a doctoral candidate in the LEAR/Thoth teams at Inria Grenoble under the supervision of Zaid Harchaoui and Cordelia Schmid, and received my PhD from Université Grenoble Alpes in 2016. I joined the Xerox Research Centre Europe in 2016, which was acquired by Naver Labs in 2017.


  • December 2019. New paper on arXiv benchmarking action recognition methods trained on Kinetics on mimed actions. The Mimetics dataset has been released.
  • December 2019. R2D2 training and testing code is now online.
  • September 2019. Our R2D2 paper has been accepted as an oral at NeurIPS 2019! Preliminary version of the paper available on arXiv.
  • June 2019. We won the local feature track of the long-term visual localization challenge: our method, named R2D2, will be presented at the CVPR'19 workshop.
  • June 2019. Two papers to be presented at CVPR 2019: the first on visual localization from objects-of-interest (see the blog post and the VirtualGallery dataset), the second on action recognition (see the blog post; the code is available online).
  • January 2019. LCR-Net++ has now appeared in PAMI.
  • September 2018. Demo accepted at ECCV 2018: a real-time version of LCR-Net++ for multi-person 2D/3D pose estimation in the wild from a single RGB image, now with a ResNet backbone and PyTorch code (available online) for better performance.
  • June 2018. Demo accepted at CVPR 2018 on a real-time version of LCR-Net.
  • March 2018. A newer and better version of LCR-Net, called LCR-Net++, is available on arXiv.
  • March 2018. Paper accepted at CVPR 2018 on action recognition from pose motion.
  • October 2017. Training and test code for the ACT-Detector ICCV'17 paper on action detection is now available.
  • July 2017. Two papers accepted at ICCV 2017.
  • July 2017. Test code for our LCR-Net CVPR'17 paper on joint 2D-3D human pose detection is now available.
  • March 2017. Paper accepted at CVPR 2017.


+33 (0)4 76 61 41 69