Abstract
OpenStreetMap (OSM), an online and versatile source of volunteered geographic information (VGI), is widely used for
human self-localization by matching nearby visual observations with vectorized map data. However, due to the divergence in modality and viewpoint, image-to-OSM (I2O) matching and localization remain challenging for robots, preventing the full utilization of VGI data in the unmanned ground vehicle (UGV) and logistics industries. Inspired by the fact that the human brain relies on different regions when processing geometric and semantic information for spatial localization tasks, in this paper we propose OSMLoc, a brain-inspired monocular visual localization method with semantic and geometric guidance that improves accuracy, robustness, and generalization. First, we equip OSMLoc with a visual foundation model to extract powerful image features (see the first sketch below the abstract). Second, we propose a geometry-guided depth distribution adapter to bridge monocular depth estimation and the camera-to-bird's-eye-view (BEV) transform (see the lifting sketch below). Third, semantic embeddings from the OSM data serve as auxiliary guidance for image-to-OSM feature matching (see the matching sketch below).
To validate the proposed OSMLoc, we collect a worldwide cross-area and cross-condition (CC) benchmark for extensive evaluation.
Experiments on the MGL dataset, the CC validation benchmark, and the KITTI dataset demonstrate the superiority of our method.
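The three components above can be illustrated with short, self-contained PyTorch sketches; none of them is the released OSMLoc code, and every name not in the abstract is an assumption. The first sketch extracts dense patch features with a visual foundation model. The abstract does not name the backbone, so DINOv2 loaded through torch.hub is assumed here purely for illustration.

```python
# Hedged sketch: dense feature extraction with a visual foundation model.
# DINOv2 (ViT-S/14) via torch.hub is an assumption, not necessarily the
# backbone used by OSMLoc. The first call downloads weights from the hub.
import torch

model = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
model.eval()

# Input height/width must be divisible by the 14-pixel patch size.
img = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    out = model.forward_features(img)

# 224/14 = 16 patches per side -> 256 patch tokens, each 384-dimensional.
patch_tokens = out["x_norm_patchtokens"]
print(patch_tokens.shape)  # torch.Size([1, 256, 384])
```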
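The second sketch shows one plausible reading of the geometry-guided depth distribution adapter: predict a categorical depth distribution per pixel and use it to lift image features into a top-down grid, in the spirit of lift-splat-style camera-to-BEV transforms. `DepthAdapter`, `lift_to_bev`, and all shapes are hypothetical.

```python
# Hedged sketch: depth-distribution-guided image-to-BEV lifting.
# All class/function names and tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn

class DepthAdapter(nn.Module):
    """Predicts a categorical depth distribution for every image feature."""
    def __init__(self, feat_dim: int, num_depth_bins: int):
        super().__init__()
        self.depth_head = nn.Conv2d(feat_dim, num_depth_bins, kernel_size=1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) image features from the backbone.
        # Returns (B, D, H, W): per-pixel probabilities over D depth bins.
        return self.depth_head(feats).softmax(dim=1)

def lift_to_bev(feats: torch.Tensor, depth_probs: torch.Tensor) -> torch.Tensor:
    """Outer-product lift: weight each feature by its depth distribution.

    feats:       (B, C, H, W)
    depth_probs: (B, D, H, W)
    returns:     (B, C, D, W) top-down grid, pooled over image height.
    """
    # (B, 1, D, H, W) * (B, C, 1, H, W) -> (B, C, D, H, W)
    volume = depth_probs.unsqueeze(1) * feats.unsqueeze(2)
    # Collapse the image-height axis to obtain a (depth x width) BEV grid.
    return volume.mean(dim=3)

if __name__ == "__main__":
    B, C, H, W, D = 2, 64, 32, 88, 48
    feats = torch.randn(B, C, H, W)
    adapter = DepthAdapter(feat_dim=C, num_depth_bins=D)
    bev = lift_to_bev(feats, adapter(feats))
    print(bev.shape)  # torch.Size([2, 64, 48, 88])
```

The softmax over depth bins is what lets geometric (depth) supervision guide the lift: sharper distributions place image features at more confident ranges in the BEV grid.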
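The last sketch frames image-to-OSM matching as exhaustive correlation between the lifted BEV patch and a feature map encoded from OSM tiles, producing a distribution over candidate map positions. The shared embedding space and `match_bev_to_osm` are assumptions for illustration; in the abstract's terms, the semantic guidance would enter through the OSM-side embeddings.

```python
# Hedged sketch: image-to-OSM matching as sliding-window correlation.
# Assumes the BEV features and the OSM features already live in a shared
# embedding space; the function name and shapes are illustrative.
import torch
import torch.nn.functional as F

def match_bev_to_osm(bev_feat: torch.Tensor, osm_feat: torch.Tensor) -> torch.Tensor:
    """Score a BEV observation against every position in an OSM feature map.

    bev_feat: (C, h, w)  BEV features lifted from the camera image.
    osm_feat: (C, H, W)  features encoded from OSM tiles, with H>=h, W>=w.
    returns:  (H-h+1, W-w+1) log-probabilities over candidate positions.
    """
    # L2-normalize channels so correlation behaves like cosine similarity.
    bev = F.normalize(bev_feat, dim=0)
    osm = F.normalize(osm_feat, dim=0)
    # Treat the BEV patch as a convolution kernel and slide it over the map.
    scores = F.conv2d(osm.unsqueeze(0), bev.unsqueeze(0)).squeeze(0).squeeze(0)
    # Softmax over all positions yields a distribution over map locations.
    return F.log_softmax(scores.flatten(), dim=0).view(scores.shape)

if __name__ == "__main__":
    C = 8
    osm = torch.randn(C, 64, 64)
    bev = osm[:, 20:36, 30:46].clone()  # ground-truth crop at offset (20, 30)
    logp = match_bev_to_osm(bev, osm)
    idx = logp.flatten().argmax()
    print(divmod(idx.item(), logp.shape[1]))  # expected: (20, 30)
```

A full localizer would also search over heading by rotating the BEV patch; only translation is scored here for brevity.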
Introduction Video
We are working hard on the introduction video.
Acknowledgements:
This template is borrowed from FreeReg.