Paper FrBT6.4
Guo, Xinwei (Michigan State University), Kent, Daniel (Michigan State University), Lu, Xiaohu (Michigan State University), Radha, Hayder (Michigan State University)
A Taxonomization and Comparative Evaluation of Targetless Camera-Lidar Calibration for Autonomous Vehicles
Scheduled for presentation during the Regular Session "LiDAR-based perception" (FrBT6), Friday, September 27, 2024,
14:30−14:50, Salon 14
2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada
Keywords: Sensing, Vision, and Perception; Advanced Vehicle Safety Systems; Driver Assistance Systems
Abstract
The state of the art in autonomous vehicle technology has advanced through progress in multiple disciplines, including multi-modal object detection algorithms. With the falling cost of multi-sensor fusion hardware, namely camera and lidar, combined with state-of-the-art fusion-based detection algorithms, camera-lidar fusion produces superior perception results. However, fusing camera and lidar data requires known extrinsic calibration parameters to properly combine these modalities, and these parameters can change during an autonomous vehicle's operation. In this paper, we taxonomize the leading targetless calibration methods into three categories based on their underlying algorithms: feature-based, information theory-based, and learning-based methods. To showcase the impact of selecting a specific automatic targetless calibration method, we evaluate the robustness of each method in the context of multi-modal object detection. We demonstrate that miscalibration can cause severe degradation in performance, even with seemingly small changes in calibration parameters. We also find that most recent learning-based camera-lidar calibration methods lead to equivalent or superior 3D object detection performance compared with state-of-the-art feature-based and information theory-based calibration methods. To the best of our knowledge, this work represents a first attempt at analyzing the impact of camera-lidar miscalibration on the performance of multi-modal object detection frameworks.
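To make concrete why the extrinsic calibration matters for fusion, the following is a minimal sketch (not taken from the paper) of the standard lidar-to-image projection that fusion-based detectors rely on: a small extrinsic error in T_cam_lidar shifts every projected point, misaligning lidar returns with image pixels. The function name, variable names (T_cam_lidar, K), and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D lidar points into the camera image plane.

    points_lidar: (N, 3) points in the lidar frame.
    T_cam_lidar:  (4, 4) extrinsic transform from lidar frame to camera frame.
    K:            (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    # Lift to homogeneous coordinates and apply the extrinsic transform.
    ones = np.ones((points_lidar.shape[0], 1))
    pts_h = np.hstack([points_lidar, ones])            # (N, 4)
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]         # (N, 3) in the camera frame

    # Keep only points with positive depth (in front of the camera).
    in_front = pts_cam[:, 2] > 0

    # Perspective projection with the intrinsic matrix, then normalize by depth.
    uv_h = (K @ pts_cam.T).T                           # (N, 3)
    uv = uv_h[:, :2] / uv_h[:, 2:3]                    # (N, 2) pixel coordinates
    return uv, in_front
```

Under this model, a miscalibrated T_cam_lidar corrupts the pixel coordinates uv directly, which is the mechanism by which even small extrinsic errors degrade downstream fusion-based detection.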