ITSC 2025 Paper Abstract


Paper TH-LM-T25.1

Wei, Chuheng (University of California, Riverside), Qin, Ziye (Southwest Jiaotong University), Zimmer, Walter (Technical University of Munich (TUM)), Wu, Guoyuan (University of California, Riverside), Barth, Matthew (University of California, Riverside)

HeCoFuse: Cross-Modal Complementary V2X Cooperative Perception with Heterogeneous Sensors

Scheduled for presentation during the Regular Session "S25a-Cooperative and Connected Autonomous Systems" (TH-LM-T25), Thursday, November 20, 2025, 10:30–10:50, Coolangatta 4

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change.

Keywords: Cooperative Driving Systems and Vehicle Coordination in Multi-vehicle Scenarios, Advanced Sensor Fusion for Robust Autonomous Vehicle Perception, Lidar-based Mapping and Environmental Perception for ITS Applications

Abstract

Real-world Vehicle-to-Everything (V2X) cooperative perception systems often operate under heterogeneous sensor configurations due to cost constraints and deployment variability across vehicles and infrastructure. This heterogeneity poses significant challenges for feature fusion and perception reliability. To address these issues, we propose HeCoFuse, a unified framework designed to enable effective cooperative perception across diverse sensor combinations, where nodes may carry Cameras (C), LiDARs (L), or both. By introducing a hierarchical fusion mechanism that adaptively weights features through a combination of channel-wise and spatial attention, HeCoFuse tackles critical challenges such as cross-modality feature misalignment and imbalanced representation quality. In addition, an adaptive spatial resolution adjustment module is employed to balance computational cost and fusion effectiveness. To enhance robustness across different configurations, we further implement a cooperative learning strategy that dynamically adjusts the fusion type based on the available modalities. Experiments on the real-world TUMTraf-V2X dataset demonstrate that HeCoFuse achieves 43.22% 3D mAP under the full sensor configuration (LC+LC), outperforming the CoopDet3D baseline by 1.17%, and reaches an even higher 43.38% 3D mAP in the L+LC scenario, while maintaining 3D mAP in the range of 21.74%-43.38% across nine heterogeneous sensor configurations. These results, validated by our first-place finish in the CVPR 2025 DriveX challenge, establish HeCoFuse as the current state-of-the-art on the TUMTraf-V2X dataset while demonstrating robust performance across diverse sensor deployments.
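The abstract does not include implementation details, so the following is only a minimal illustrative sketch of attention-weighted cross-modal fusion of the kind described above: concatenated camera and LiDAR bird's-eye-view features are re-weighted with channel-wise (squeeze-and-excitation style) and spatial attention before projection. All module names, tensor shapes, and the specific SE/CBAM-style attention design are assumptions for illustration, not the authors' actual HeCoFuse architecture.

import torch
import torch.nn as nn

class AttentionWeightedFusion(nn.Module):
    """Illustrative sketch (not the authors' code): fuses two BEV feature
    maps, e.g. camera- and LiDAR-derived, by re-weighting their concatenation
    with channel-wise and spatial attention before projecting back."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Channel attention (squeeze-and-excitation style): global pooling,
        # bottleneck MLP as 1x1 convs, sigmoid gate per channel.
        self.channel_attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial attention: one 7x7 conv producing a per-location gate.
        self.spatial_attn = nn.Sequential(
            nn.Conv2d(2 * channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        # Project the fused map back to the per-modality channel count.
        self.out_proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_a, feat_b], dim=1)  # (B, 2C, H, W)
        x = x * self.channel_attn(x)            # per-channel re-weighting
        x = x * self.spatial_attn(x)            # per-location re-weighting
        return self.out_proj(x)                 # (B, C, H, W)

if __name__ == "__main__":
    fusion = AttentionWeightedFusion(channels=64)
    cam = torch.randn(1, 64, 128, 128)    # assumed camera BEV feature shape
    lidar = torch.randn(1, 64, 128, 128)  # assumed LiDAR BEV feature shape
    print(fusion(cam, lidar).shape)       # torch.Size([1, 64, 128, 128])

Per the abstract, HeCoFuse additionally adjusts the fusion type according to which modalities a node actually carries (C, L, or LC); a full implementation would therefore select or re-weight fusion branches per configuration rather than always fusing two fixed inputs as this sketch does.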
