ITSC 2025 Paper Abstract

Paper TH-EA-T28.6

Zhao, Zhiguo (Tongji University), Zhao, Cong (Tongji University), Chen, Kun (Tongji University), Hu, Xiaoxi (Beijing Jiaotong University), Du, Yuchuan (Tongji University), Ji, Yuxiong (Tongji University)

EvMVX-Net: Uncertainty-Aware 3D Object Detection Via Evidential Regression and Mixture of Normal-Inverse Gamma Distributions

Scheduled for presentation during the Regular Session "S28b-Multi-Sensor Fusion and Perception for Robust Autonomous Driving" (TH-EA-T28), Thursday, November 20, 2025, 14:50−15:30, Stradbroke

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

Keywords: Advanced Sensor Fusion for Robust Autonomous Vehicle Perception; Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles; Lidar-based Mapping and Environmental Perception for ITS Applications

Abstract

Reliable 3D object detection is critical for safety in Autonomous Driving (AD) systems, especially in the presence of sensor noise. Traditional multi-modal 3D object detection methods based on LiDAR and camera inputs often ignore uncertainty modeling, limiting their trustworthiness in real-world deployment. To address this challenge, we propose EvMVX-Net, a novel uncertainty-aware multi-modal 3D object detection framework that incorporates evidential deep learning and Normal-Inverse Gamma (NIG) distribution-based regression. Built upon the MVX-Net architecture, EvMVX-Net introduces an auxiliary LiDAR-only head alongside the standard fusion head, each modeling its regression outputs with an NIG distribution to estimate uncertainty. To integrate the predictions and uncertainties of both branches, we adopt a Mixture of NIGs (MoNIG) fusion strategy, enabling robust and trustworthy object detection. We validate EvMVX-Net on the KITTI benchmark and demonstrate its superiority over both sampling-based and sampling-free baselines. The proposed method achieves up to a 5.56% improvement in Average Precision (AP) while providing calibrated uncertainty estimates, as measured by Expected Calibration Error (ECE). Extensive ablation studies further confirm the effectiveness of each proposed component. Our results highlight the importance of uncertainty modeling for reliable multi-modal perception in safety-critical applications.
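For context, below is a minimal PyTorch sketch of the two ingredients the abstract names: an NIG head in the standard deep evidential regression formulation (Amini et al., 2020) and the NIG summation operator underlying Mixture-of-NIGs fusion (Ma et al., 2021). This is an illustration of the general technique under those published formulations, not the authors' implementation; the function names are ours, and how EvMVX-Net attaches these heads to MVX-Net is described only at the level of the abstract above.

import math

import torch
import torch.nn.functional as F


def nig_params(raw: torch.Tensor):
    # Map raw head outputs of shape (..., 4) to valid NIG parameters:
    # nu, beta > 0 via softplus; alpha > 1 so E[sigma^2] = beta / (alpha - 1) is finite.
    gamma, nu, alpha, beta = raw.unbind(dim=-1)
    return gamma, F.softplus(nu), F.softplus(alpha) + 1.0, F.softplus(beta)


def nig_nll(y, gamma, nu, alpha, beta):
    # Negative log-likelihood of target y under the Student-t predictive
    # induced by an NIG prior (standard deep evidential regression loss).
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * torch.log(math.pi / nu)
            - alpha * torch.log(omega)
            + (alpha + 0.5) * torch.log(nu * (y - gamma) ** 2 + omega)
            + torch.lgamma(alpha)
            - torch.lgamma(alpha + 0.5))


def monig_fuse(p1, p2):
    # NIG summation operator used by MoNIG: fuses two NIG branches
    # (here, e.g., a fusion head and a LiDAR-only head) into a single NIG.
    g1, n1, a1, b1 = p1
    g2, n2, a2, b2 = p2
    nu = n1 + n2
    gamma = (n1 * g1 + n2 * g2) / nu  # confidence-weighted mean
    alpha = a1 + a2 + 0.5
    beta = (b1 + b2
            + 0.5 * n1 * (g1 - gamma) ** 2
            + 0.5 * n2 * (g2 - gamma) ** 2)  # branch disagreement inflates variance
    return gamma, nu, alpha, beta


def uncertainties(gamma, nu, alpha, beta):
    # Closed-form uncertainty decomposition of an NIG distribution.
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2]: data noise
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu]: model uncertainty
    return aleatoric, epistemic

Under this reading, monig_fuse would be applied per regression target (e.g., box center, size, and heading offsets) to combine the fusion head's NIG output with the auxiliary LiDAR-only head's, and the resulting uncertainty estimates are what a calibration metric such as ECE then evaluates.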

