Paper FR-LM-T42.2
Mengistu, Tekalign (North Carolina A&T State University), Getahun, Tesfamichael (North Carolina A&T State University), Karimoddini, Ali (North Carolina A&T State University)
A Robust Radar-Camera Fusion for 2D Object Detection for Autonomous Driving
Scheduled for presentation during the Regular Session "S42a-Safety and Risk Assessment for Autonomous Driving Systems" (FR-LM-T42), Friday, November 21, 2025,
10:50−11:10, Broadbeach 3
2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia
This information is tentative and subject to change. Compiled on October 18, 2025
Keywords: Autonomous Vehicle Safety and Performance Testing
Abstract
Radar-camera fusion in autonomous driving provides a cost-effective yet reliable perception solution. However, it presents significant challenges, particularly due to the heterogeneous nature of the data from the two sensors. Radar provides sparse, low-resolution data with depth and velocity information, while cameras offer dense, high-resolution visual context. Aligning and combining these fundamentally different data types in a way that preserves their complementary strengths remains a core challenge. This paper proposes a robust fusion framework that integrates radar and camera data to enhance the perception stack of autonomous vehicles, enabling them to handle complex rural and urban driving scenarios. The proposed method leverages the complementary strengths of the two sensing modalities by first encoding key radar parameters, such as range, velocity, and radar cross-section (RCS), into color channels. A proportional height is assigned to each radar detection based on its radar cross-section, enabling more accurate spatial mapping than previous methods. Feature maps from the radar and camera inputs are extracted and fused via concatenation, then refined with a bottleneck attention mechanism to highlight key channels and spatial features. Evaluated on the nuScenes dataset, the developed method outperforms current state-of-the-art radar-camera fusion models in 2D object detection, proving effective in challenging, varied environments.
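As a rough illustration of the radar-to-image encoding the abstract describes, the sketch below rasterizes projected radar detections into a pseudo-image, mapping range, velocity, and RCS to color channels and painting each detection as a vertical line whose height scales with RCS. The channel assignments, normalization constants, and height scaling are illustrative assumptions; the paper's actual parameters are not given here.

```python
import numpy as np

def encode_radar_image(dets, img_h=900, img_w=1600,
                       max_range=100.0, max_speed=30.0, max_rcs=40.0):
    """Rasterize radar detections into a 3-channel pseudo-image.

    `dets` is an array of rows (u, v, range, velocity, rcs), where (u, v)
    are the detection's pixel coordinates after projection onto the image
    plane. Channel roles and scaling constants are illustrative guesses,
    not the paper's exact choices.
    """
    canvas = np.zeros((img_h, img_w, 3), dtype=np.float32)
    for u, v, rng, vel, rcs in dets:
        # Normalize each radar parameter into [0, 1] for a color channel.
        color = np.array([
            min(rng / max_range, 1.0),          # R: range
            min(abs(vel) / max_speed, 1.0),     # G: radial speed
            min(max(rcs, 0.0) / max_rcs, 1.0),  # B: RCS
        ], dtype=np.float32)
        # Line height grows with RCS: detections with larger cross-section
        # (typically larger objects) extend farther above the anchor point.
        h = int(50 + 200 * min(max(rcs, 0.0) / max_rcs, 1.0))
        u, v = int(u), int(v)
        if 0 <= u < img_w and 0 <= v < img_h:
            top = max(v - h, 0)
            canvas[top:v + 1, u, :] = color
    return canvas
```

The resulting pseudo-image has the same spatial layout as the camera frame, so radar and camera feature maps can later be concatenated channel-wise before the attention-based refinement.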