Abstract

Sensor setups of robotic platforms commonly include both camera and LiDAR as they provide complementary information. However, fusing these two modalities typically requires a highly accurate calibration between them. In this paper, we propose MDPCalib, a novel method for camera-LiDAR calibration that requires neither human supervision nor any specific target objects. Instead, we utilize sensor motion estimates from visual and LiDAR odometry as well as deep learning-based 2D-pixel-to-3D-point correspondences that are obtained without in-domain retraining. We represent camera-LiDAR calibration as a graph optimization problem and minimize the costs induced by constraints from sensor motion and point correspondences. In extensive experiments, we demonstrate that our approach yields highly accurate extrinsic calibration parameters and is robust to random initialization. Additionally, our approach generalizes to a wide range of sensor setups, which we demonstrate by employing it on various robotic platforms, including a self-driving perception car, a quadruped robot, and a UAV.

Technical Approach

Figure: Overview of our approach.

In this work, we propose a novel method for automatic target-less camera-LiDAR calibration that requires neither human initialization nor a special data recording procedure. Our approach, called MDPCalib, processes two input streams of RGB images and 3D point clouds. The first step comprises a coarse registration based on sensor motion estimates from visual and LiDAR odometry: assuming time-synchronized sensors, we match sensor poses that serve as constraints in a graph optimization problem. In the second step, we utilize a neural network that, given the obtained initial calibration parameters, predicts 2D-pixel-to-3D-point correspondences from images and point clouds. We add these correspondences as additional constraints to the optimization problem. A final round of graph optimization yields the extrinsic transformation between the camera and the LiDAR.
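To make the two-stage optimization more concrete, below is a minimal Python sketch. It is not our implementation: MDPCalib optimizes a graph of constraints, whereas the sketch substitutes a plain nonlinear least-squares solver from SciPy. All inputs (cam_motions, lidar_motions, pts3d, pts2d, K) are hypothetical synthetic placeholders standing in for matched odometry motions, predicted 2D-3D correspondences, and the camera intrinsics.

import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation as R

def to_mat(x):
    # 6-DoF parameters (rotation vector + translation) -> 4x4 transform
    T = np.eye(4)
    T[:3, :3] = R.from_rotvec(x[:3]).as_matrix()
    T[:3, 3] = x[3:]
    return T

def motion_residuals(x, cam_motions, lidar_motions):
    # Hand-eye-style constraint from matched odometry: A @ T = T @ B,
    # where A and B are camera and LiDAR relative motions.
    T = to_mat(x)
    return np.concatenate([(A @ T - T @ B)[:3, :].ravel()
                           for A, B in zip(cam_motions, lidar_motions)])

def reproj_residuals(x, pts3d, pts2d, K):
    # Pixel error of 3D points projected into the image with
    # intrinsics K and the current extrinsic estimate.
    T = to_mat(x)
    p_cam = (T[:3, :3] @ pts3d.T).T + T[:3, 3]
    uv = (K @ p_cam.T).T
    return (uv[:, :2] / uv[:, 2:3] - pts2d).ravel()

# Hypothetical synthetic inputs, for illustration only.
rng = np.random.default_rng(0)
T_true = to_mat(np.array([0.1, -0.2, 0.05, 0.3, 0.0, -0.1]))
lidar_motions = [to_mat(rng.normal(scale=0.2, size=6)) for _ in range(20)]
cam_motions = [T_true @ B @ np.linalg.inv(T_true) for B in lidar_motions]
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts3d = rng.uniform(-1.0, 1.0, size=(50, 3)) + np.array([0.0, 0.0, 5.0])
p_cam = (T_true[:3, :3] @ pts3d.T).T + T_true[:3, 3]
uv = (K @ p_cam.T).T
pts2d = uv[:, :2] / uv[:, 2:3]

# Stage 1: coarse extrinsics from motion constraints alone.
coarse = least_squares(motion_residuals, np.zeros(6),
                       args=(cam_motions, lidar_motions)).x

# Stage 2: refine jointly with 2D-3D correspondence constraints.
def joint_residuals(x):
    return np.concatenate([
        motion_residuals(x, cam_motions, lidar_motions),
        0.1 * reproj_residuals(x, pts3d, pts2d, K),  # illustrative weighting
    ])

refined = least_squares(joint_residuals, coarse).x
print("Estimated camera-LiDAR extrinsics:\n", to_mat(refined))

Note that the motion-only stage mirrors why a coarse initialization is obtainable without correspondences, while the joint stage mirrors how pixel-to-point constraints sharpen the estimate; the relative weighting between the two residual types is an assumption of this sketch.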

Video

Code

A software implementation of this project can be found in our GitHub repository. It is released for academic use under the GPLv3 license. For any commercial purpose, please contact the authors.

Publications

If you find our work useful, please consider citing our paper:

Kürsat Petek, Niclas Vödisch, Johannes Meyer, Daniele Cattaneo, Abhinav Valada, and Wolfram Burgard
Automatic Target-Less Camera-LiDAR Calibration from Motion and Deep Point Correspondences
arXiv preprint arXiv:2403.11761, 2024.

Authors

Kürsat Petek
University of Freiburg

Niclas Vödisch
University of Freiburg

Johannes Meyer
University of Freiburg

Daniele Cattaneo
University of Freiburg

Abhinav Valada
University of Freiburg

Wolfram Burgard
University of Technology Nuremberg

Acknowledgment

This work was funded by the German Research Foundation (DFG) Emmy Noether Program, grant No. 468878300, and an academic grant from NVIDIA.