Learning to Detect Scene Landmarks for Camera Localization
- Tien Do
- Ondrej Miksik
- Joseph DeGol
- Hyun Soo Park
- Sudipta Sinha
Modern camera localization methods that use image retrieval, feature matching, and 3D structure-based pose estimation require long-term storage of numerous scene images or a vast amount of image features. This can make them unsuitable for resource-constrained VR/AR devices and also raises serious privacy concerns. We present a new learned camera localization technique that eliminates the need to store features or a detailed 3D point cloud. Our key idea is to implicitly encode the appearance of a sparse yet salient set of 3D scene points into a convolutional neural network (CNN) that can detect these scene points in query images whenever they are visible. We refer to these points as scene landmarks. We also show that a CNN can be trained to regress bearing vectors for such landmarks even when they are not within the camera’s field-of-view. We demonstrate that the predicted landmarks yield accurate pose estimates and that our method outperforms DSAC*, the state-of-the-art in learned localization. Furthermore, extending HLoc (an accurate method) by combining its correspondences with our predictions boosts its accuracy even further.
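Because each detected scene landmark has a known 3D position, camera pose can be recovered from the 2D detections via a standard perspective-n-point (PnP) solve. The sketch below illustrates that final step only; it is not the authors' released pipeline, and the function name `estimate_pose_from_landmarks` and the RANSAC parameters are illustrative assumptions built on OpenCV's generic PnP solver.

```python
import numpy as np
import cv2


def estimate_pose_from_landmarks(points_2d, points_3d, K, dist_coeffs=None):
    """Recover camera pose from 2D detections of known 3D scene landmarks.

    points_2d : (N, 2) detected landmark locations in the query image (pixels).
    points_3d : (N, 3) corresponding landmark positions in the scene frame.
    K         : (3, 3) camera intrinsic matrix.
    Returns (R, t) mapping scene coordinates to camera coordinates,
    or None if too few landmarks were detected.
    """
    if len(points_2d) < 4:
        return None  # PnP needs at least 4 correspondences
    if dist_coeffs is None:
        dist_coeffs = np.zeros(4)

    # Robust PnP: RANSAC rejects misdetected landmarks as outliers.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        points_3d.astype(np.float64),
        points_2d.astype(np.float64),
        K.astype(np.float64),
        dist_coeffs,
        reprojectionError=3.0,   # assumed inlier threshold in pixels
        iterationsCount=1000,
        flags=cv2.SOLVEPNP_ITERATIVE,
    )
    if not ok or inliers is None:
        return None

    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return R, tvec.reshape(3)
```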
Publication Downloads
Learning to Detect Scene Landmarks for Camera Localization
May 20, 2022
Source code and data for the CVPR 2022 paper "Learning to Detect Scene Landmarks for Camera Localization".