Abstract
LiDAR place recognition is a critical capability for autonomous navigation and cross-modal localization in large-scale outdoor environments.
Existing approaches predominantly rely on pre-built dense 3D maps or aerial imagery,
which impose significant storage overhead and lack real-time adaptability.
In this paper, we propose OPAL, a novel network for LiDAR place recognition that leverages OpenStreetMap (OSM)
as a lightweight and up-to-date prior. Our key innovation lies in bridging the domain disparity
between sparse LiDAR scans and structured OSM data through two carefully designed components:
a cross-modal visibility mask that identifies the maximal observable regions shared by both modalities to guide feature learning,
and an adaptive radial fusion module that dynamically consolidates multiscale radial features into discriminative global descriptors.
Extensive experiments on the augmented KITTI and KITTI-360 datasets demonstrate OPAL's superiority,
achieving 15.98% higher recall at the 1 m threshold for the top-1 retrieved match while operating at 12x faster inference speed
than state-of-the-art approaches.
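To give a concrete picture of the descriptor-level idea, below is a minimal, hypothetical PyTorch sketch of how multiscale radial features could be adaptively weighted and pooled into a single global descriptor. The module name, tensor shapes, and attention-style weighting are illustrative assumptions only and do not reproduce the actual OPAL architecture; please refer to the paper and code release for the real design.

```python
# Hypothetical sketch: adaptively fuse multiscale radial features into one
# global descriptor. Shapes and the weighting scheme are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RadialFusionSketch(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Predict one learnable score per scale from that scale's own summary.
        self.scale_score = nn.Linear(feat_dim, 1)
        self.proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, radial_feats: list) -> torch.Tensor:
        # radial_feats: list of (B, R_s, C) tensors, one per scale,
        # where R_s is the number of radial bins at that scale.
        pooled = [f.mean(dim=1) for f in radial_feats]      # per-scale (B, C)
        stacked = torch.stack(pooled, dim=1)                 # (B, S, C)
        weights = F.softmax(self.scale_score(stacked), dim=1)  # (B, S, 1)
        fused = (weights * stacked).sum(dim=1)               # (B, C)
        return F.normalize(self.proj(fused), dim=-1)         # unit-norm descriptor

# Usage: three scales of radial features for a batch of 2 scans.
feats = [torch.randn(2, r, 256) for r in (16, 32, 64)]
descriptor = RadialFusionSketch()(feats)  # (2, 256), L2-normalized
```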
Introduction Video
We are currently preparing the introduction video.
Acknowledgements
This project page template is borrowed from FreeReg.