ITSC 2025 Paper Abstract

Paper WE-EA-T4.2

Li, Zhengyi (Jilin University, China), Hu, Hongyu (Jilin University, China), Xing, Yang (Cranfield University, United Kingdom), Lv, Chen (Nanyang Technological University, Singapore)

BatchEnsemble-Based Perception Uncertainty Quantification for Autonomous Vehicles

Scheduled for presentation during the Regular Session "S04b-Intelligent Perception and Detection Technologies for Connected Mobility" (WE-EA-T4), Wednesday, November 19, 2025, 13:50−14:10, Surfers Paradise 1

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change.

Keywords: Real-time Object Detection and Tracking for Dynamic Traffic Environments, Lidar-based Mapping and Environmental Perception for ITS Applications, Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles

Abstract

Quantifying uncertainty significantly enhances the reliability of perception in autonomous vehicles and provides more comprehensive environmental information for downstream modules. However, most existing perception methods cannot effectively estimate the uncertainty associated with their predictions. To address this gap, we propose a BatchEnsemble-based network for uncertainty quantification in 3D object detection from point cloud data. Specifically, a BatchEnsemble-based convolutional layer is designed to reduce the memory overhead of ensemble-based paradigms. Building upon this, a series of probabilistic object detection networks is constructed by directly modeling object attributes with multivariate Gaussian distributions, enabling the parallel extraction of both object features and their associated variances. Subsequently, an uncertainty-aware fusion strategy is introduced to integrate and filter multiple detection results based on an uncertainty quantification metric, the Uncertainty Index, yielding more reliable and comprehensive outputs. The proposed method is validated on the KITTI dataset. Experimental results demonstrate competitive detection accuracy and effectiveness across various scenarios, including objects of differing detection difficulty, identification of false positives, and adverse conditions such as snowy weather and sensor degradation.
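
To make the abstract's two core mechanisms concrete, the sketch below illustrates them in PyTorch. This is a hedged illustration of the general techniques, not the authors' implementation: the class names and parameters are hypothetical, the convolution follows the generic BatchEnsemble recipe (Wen et al., ICLR 2020), in which ensemble members share one slow weight and differ only in cheap rank-1 fast weights, and the head simply outputs a mean and variance per object attribute as a diagonal Gaussian.

import torch
import torch.nn as nn

class BatchEnsembleConv2d(nn.Module):
    """Hypothetical BatchEnsemble-style convolution: one shared slow weight W
    plus per-member rank-1 fast weights (r_i, s_i), so member i effectively
    applies W * (s_i r_i^T) without storing ensemble_size full kernels."""
    def __init__(self, in_ch, out_ch, kernel_size, ensemble_size, **kwargs):
        super().__init__()
        self.ensemble_size = ensemble_size
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, bias=False, **kwargs)
        # Fast weights start at 1 so every member begins at the shared weight.
        self.r = nn.Parameter(torch.ones(ensemble_size, in_ch))
        self.s = nn.Parameter(torch.ones(ensemble_size, out_ch))

    def forward(self, x):
        # x is tiled as (ensemble_size * batch, in_ch, H, W), member 0 first.
        # (W * s r^T) applied to x equals: scale input by r, run the single
        # shared convolution, then scale the output by s.
        batch = x.shape[0] // self.ensemble_size
        r = self.r.repeat_interleave(batch, dim=0)[:, :, None, None]
        s = self.s.repeat_interleave(batch, dim=0)[:, :, None, None]
        return self.conv(x * r) * s

class GaussianAttributeHead(nn.Module):
    """Hypothetical probabilistic head: a diagonal Gaussian over num_attrs
    box attributes (mean and variance), trainable with nn.GaussianNLLLoss."""
    def __init__(self, in_ch, num_attrs):
        super().__init__()
        self.mean = nn.Conv2d(in_ch, num_attrs, kernel_size=1)
        self.log_var = nn.Conv2d(in_ch, num_attrs, kernel_size=1)

    def forward(self, features):
        # exp keeps the predicted variance strictly positive.
        return self.mean(features), self.log_var(features).exp()

Because all members run in one tiled forward pass over a single shared kernel, memory grows by roughly O(ensemble_size x channels) rather than by a full network copy per member, and the predicted per-attribute variances are exactly the kind of signal an uncertainty metric such as the paper's Uncertainty Index could aggregate.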
