Results will be presented in Long Beach, California on June 17th at CVPR, the world’s premier event in computer vision.
Visual localization is the task of estimating the camera pose from which an image was taken, and it is a key technology in robotics, self-driving cars, Augmented Reality and Virtual Reality. R2D2*, the method invented by NAVER LABS Europe, came out top in the international challenge on local feature detection and matching, which are fundamental steps in these applications.
‘Classical methods are based on a “detect-then-describe” paradigm where separate, handcrafted methods are used to first identify repeatable keypoints and then represent them with a local descriptor. We argue that salient regions are not necessarily discriminative and can therefore actually harm the performance of the description. We claim that descriptors should be learned only in regions for which matching can be performed with high confidence,’ explains Jérôme Revaud, the main author of R2D2.
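The idea of keeping only points that are both repeatable and reliable can be illustrated with a minimal sketch. This is not the R2D2 implementation; it simply assumes two hypothetical dense score maps (`repeatability` and `reliability`, as the quote describes) have already been predicted, and shows how their product suppresses points that are salient but hard to match with confidence:

```python
import numpy as np

def select_keypoints(repeatability, reliability, top_k=5):
    """Toy joint selection: keep the top-k pixels whose combined
    score (repeatability * reliability) is highest, so a location
    must score well on BOTH maps to be selected."""
    assert repeatability.shape == reliability.shape
    score = repeatability * reliability
    flat = score.ravel()
    idx = np.argsort(flat)[::-1][:top_k]  # indices of highest scores
    ys, xs = np.unravel_index(idx, score.shape)
    return list(zip(xs.tolist(), ys.tolist())), flat[idx]

# Toy example with random maps: a pixel that is highly repeatable
# (e.g. on a repetitive texture) but unreliable to match gets a low
# combined score and is filtered out.
rng = np.random.default_rng(0)
rep = rng.random((8, 8))
rel = rng.random((8, 8))
points, scores = select_keypoints(rep, rel, top_k=3)
```

In the actual method these score maps are learned jointly with the descriptors by a single network; the sketch above only captures the selection principle.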
‘We’re proud that the novel approach we developed clearly outperforms existing methods in addressing a fundamental step in the creation of computer vision applications and services,’ says Martin Humenberger, the 3D Vision tech lead in France.
Many factors are key to winning a challenge, and teamwork is one of them. The researchers who collaborated on visual localization and related tasks are Gabriela Csurka, Philippe Weinzaepfel, Noé Pion, Cesar De Souza, Yohann Cabon, Julien Morat, Nicolas Guerin and Philippe Rerole.
The work, which you can find here, will be presented in detail by Jérôme Revaud at the workshop on Long-Term Visual Localization under Changing Conditions at CVPR, held in Long Beach, California from June 15th to 20th.
More information on the challenge: https://www.visuallocalization.net/workshop/cvpr/2019/
*Reliable and Repeatable Detectors and Descriptors for Joint Sparse Keypoint Detection and Local Feature Extraction