ITSC 2024 Paper Abstract

Paper FrAT13.8

Zhao, Yang (University of Electronic Science and Technology of China), Jin, Yiwei (University of Electronic Science and Technology of China), Deng, Ruoyu (Chengdu Technological University), Tao, Yueming (University of Electronic Science and Technology of China), Peng, Zhinan (University of Electronic Science and Technology of China), Zhan, Huiqin (University of Electronic Science and Technology of China), Cheng, Hong (University of Electronic Science and Technology of China)

A Novel 4D Radar and Image Fusion for 3D Object Detection in Autonomous Driving

Scheduled for presentation during the Poster Session "3D Object Detection" (FrAT13), Friday, September 27, 2024, 10:30−12:30, Foyer

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada

This information is tentative and subject to change. Compiled on December 26, 2024

Keywords: Sensing, Vision, and Perception; Multi-modal ITS

Abstract

3D object detection is a crucial task in the perception module of autonomous driving. Current research typically relies on LiDAR or imagery for 3D object detection; however, imagery lacks depth information, and LiDAR has limited robustness in adverse weather conditions. 4D millimeter-wave radar is a promising new type of sensor: it not only provides high-precision Doppler velocity information but, with enhanced elevation resolution, can also obtain accurate height information of targets. Compared with high-beam LiDAR, however, its point cloud remains sparse, making precise semantic information difficult to extract and thus posing a significant challenge for 3D object detection with 4D millimeter-wave radar. This paper introduces 4D Radar Fusion, a 3D object detection framework that fuses 4D millimeter-wave radar with imagery. By transforming both the 4D millimeter-wave radar point cloud and the monocular image into the bird's-eye-view (BEV) space for feature fusion, the framework effectively enhances the semantic information learned from the 4D millimeter-wave radar. Experimental evaluation on the VoD autonomous driving dataset demonstrates the effectiveness of 4D Radar Fusion, which achieves a 6% improvement in 3D average precision over the baseline across the entire annotated area.
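
The abstract does not describe the fusion module itself. Purely as an illustration of the general idea of fusing two modalities on a shared BEV grid, the following minimal PyTorch sketch shows one common approach (channel concatenation followed by a convolution). The module name SimpleBEVFusion, the channel sizes, and the concat-then-conv fusion rule are assumptions for this sketch, not the authors' 4D Radar Fusion implementation.

# Minimal sketch of BEV-level feature fusion between a 4D-radar branch and an
# image branch. Illustrative only; not the paper's actual architecture.
import torch
import torch.nn as nn


class SimpleBEVFusion(nn.Module):
    """Fuse radar and image features that have already been projected onto a
    common bird's-eye-view (BEV) grid of shape (B, C, H, W)."""

    def __init__(self, radar_channels: int = 64, image_channels: int = 64,
                 fused_channels: int = 128):
        super().__init__()
        # Channel-wise concatenation followed by a 3x3 conv is one simple,
        # widely used way to mix the two modalities on the shared BEV grid.
        self.fuse = nn.Sequential(
            nn.Conv2d(radar_channels + image_channels, fused_channels,
                      kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(fused_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, radar_bev: torch.Tensor, image_bev: torch.Tensor) -> torch.Tensor:
        # Both inputs are assumed to be spatially aligned on the same BEV grid.
        return self.fuse(torch.cat([radar_bev, image_bev], dim=1))


if __name__ == "__main__":
    radar_bev = torch.randn(1, 64, 200, 200)   # sparse radar features in BEV
    image_bev = torch.randn(1, 64, 200, 200)   # image features lifted to BEV
    fused = SimpleBEVFusion()(radar_bev, image_bev)
    print(fused.shape)  # torch.Size([1, 128, 200, 200])

The fused BEV feature map would then feed a standard 3D detection head; how the image features are lifted to BEV and how the detection head is built are left out here, since the abstract does not specify them.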
