ITSC 2025 Paper Abstract

Paper FR-LM-T43.6

Peng, Xiangyuan (Infineon Technologies AG, Technical University of Munich), Wang, Yu (Technical University of Munich), Tang, Miao (China University of Geosciences), Bierzynski, Kay (Infineon Technologies AG), Servadei, Lorenzo (Technical University of Munich), Wille, Robert (Technical University of Munich)

MoRAL: Motion-Aware Multi-Frame 4D Radar and LiDAR Fusion for Robust 3D Object Detection

Scheduled for presentation during the Regular Session "S43a-Multi-Sensor Fusion and Perception for Robust Autonomous Driving" (FR-LM-T43), Friday, November 21, 2025, 12:10−12:30, Stradbroke

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

Keywords: Advanced Sensor Fusion for Robust Autonomous Vehicle Perception, Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles, Real-time Object Detection and Tracking for Dynamic Traffic Environments

Abstract

Reliable autonomous driving systems require accurate detection of traffic participants. To this end, multi-modal fusion has emerged as an effective strategy. In particular, 4D radar and LiDAR fusion methods based on multi-frame radar point clouds have demonstrated their effectiveness in bridging the point density gap. However, they often neglect the inter-frame misalignment of radar point clouds caused by object movement during accumulation and do not fully exploit the object dynamic information from 4D radar. In this paper, we propose MoRAL, a motion-aware multi-frame 4D radar and LiDAR fusion framework for robust 3D object detection. First, a Motion-aware Radar Encoder (MRE) is designed to compensate for inter-frame radar misalignment caused by moving objects. Then, a Motion Attention Gated Fusion (MAGF) module integrates radar motion features to guide LiDAR features toward dynamic foreground objects. Extensive evaluations on the View-of-Delft (VoD) dataset demonstrate that MoRAL outperforms existing methods, achieving the highest mAP of 73.30% in the entire area and 88.68% in the driving corridor. Notably, our method also achieves the best AP of 69.67% for pedestrians in the entire area and 96.25% for cyclists in the driving corridor. The code is available at: https://github.com/RealYuWang/MoRAL.
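As a rough illustration of the two ideas the abstract describes, de-skewing accumulated radar frames using per-point radial velocity and gating LiDAR features with radar motion cues, the following minimal PyTorch sketch may help. All names here (compensate_radar_frame, MotionAttentionGate), the point layout [x, y, z, v_r, RCS], and the tensor shapes are assumptions for illustration, not the authors' MRE/MAGF implementation; see the linked repository for the actual method.

```python
import torch
import torch.nn as nn

def compensate_radar_frame(points: torch.Tensor, dt: float) -> torch.Tensor:
    """Crudely align one past radar sweep to the current timestamp.

    points: (N, 5) tensor of [x, y, z, v_r, rcs], where v_r is the
    Doppler (radial) velocity measured by the 4D radar.
    dt: time elapsed since this sweep was captured, in seconds.

    Only the radial component of object motion is observable from
    Doppler, so each point is shifted along its line of sight by
    v_r * dt. This is an approximation, not the paper's MRE.
    """
    xyz = points[:, :3]
    v_r = points[:, 3:4]
    radial_dir = xyz / (xyz.norm(dim=1, keepdim=True) + 1e-6)
    xyz_comp = xyz + radial_dir * v_r * dt
    return torch.cat([xyz_comp, points[:, 3:]], dim=1)

class MotionAttentionGate(nn.Module):
    """Sigmoid gate: radar motion features modulate LiDAR BEV features
    so that dynamic foreground regions are emphasized."""
    def __init__(self, radar_ch: int, lidar_ch: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(radar_ch, lidar_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, radar_feat: torch.Tensor,
                lidar_feat: torch.Tensor) -> torch.Tensor:
        attn = self.gate(radar_feat)       # (B, C_lidar, H, W) in [0, 1]
        return lidar_feat * (1.0 + attn)   # amplify dynamic regions

# Toy usage with random tensors.
frames = [torch.randn(128, 5) for _ in range(3)]          # 3 radar sweeps
merged = torch.cat([compensate_radar_frame(f, 0.1 * i)    # 0.1 s apart
                    for i, f in enumerate(frames)], dim=0)
fuse = MotionAttentionGate(radar_ch=32, lidar_ch=64)
out = fuse(torch.randn(1, 32, 100, 100), torch.randn(1, 64, 100, 100))
```

The residual-style gate (1 + attn) keeps the original LiDAR features intact where radar reports little motion, while boosting regions with strong motion evidence; whether MoRAL uses this exact gating form is not stated in the abstract.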

