Paper TH-LM-T22.3
Nguyen, Thai (Monash University), Luu, Quang-Hung (Monash University), Vu, Hai L. (Monash University)
Modal Contributions to the Robustness of Fusion Perceptions in Autonomous Driving
Scheduled for presentation during the Invited Session "S22a-Emerging Trends in AV Research" (TH-LM-T22), Thursday, November 20, 2025,
11:10−11:30, Coolangatta 1
2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia
This information is tentative and subject to change. Compiled on October 18, 2025
Keywords: Autonomous Vehicle Safety and Performance Testing, Advanced Sensor Fusion for Robust Autonomous Vehicle Perception, Verification of Autonomous Vehicle Sensor Systems in Real-world Scenarios
Abstract
Fusion systems combining inputs from multiple sensors (e.g., LiDAR and cameras) are standard in autonomous vehicles (AVs). Yet their robustness to unseen conditions is difficult to assess in the absence of ground-truth data or modality-specific degradation. In this study, we propose a novel approach that addresses these challenges via nine metamorphic relations simulating diverse, realistic environmental and sensor conditions (snow, rain, fog, sunlight, motion, noise, shear, scale, rotation) and comparing the degraded outputs to the originals. In addition, we propose a new metric, the Relative Robustness Score (RRS), to quantify the decrease in system performance. Our approach is applied to two representative fusion systems, AVOD (region-level fusion) and LoGoNet (attention-based feature-level fusion), and the results reveal complex relationships between accuracy and robustness, with both models significantly affected by adverse weather conditions such as snow or fog. Despite its superior baseline accuracy and average RRS, LoGoNet is less robust than AVOD to specific LiDAR corruptions, such as snow, rain, and sun glare. This demonstrates that accuracy does not ensure robustness across all scenarios, and that modality importance varies with the corruption type. Our proposed methods require no ground-truth labels, offering real-time insights into the individual modal contributions to the robustness of fusion systems.
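The abstract does not give the exact definition of the Relative Robustness Score, so the following is only a minimal sketch of the general idea it describes: comparing a model's detections on corrupted inputs against its own detections on the original inputs, with no ground-truth labels involved. The function name `relative_robustness`, the IoU-matching scheme, and the 0.5 threshold are illustrative assumptions, not the paper's formula.

```python
def iou(a, b):
    """Axis-aligned 2-D IoU between boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def relative_robustness(orig_boxes, corrupted_boxes, iou_thresh=0.5):
    """Fraction of the original detections still recovered (IoU >= thresh)
    on the corrupted input -- a ground-truth-free robustness proxy.
    Illustrative only; not the RRS formula from the paper."""
    if not orig_boxes:
        return 1.0  # nothing to lose, treat as fully robust
    matched, used = 0, set()
    for ob in orig_boxes:
        # greedy one-to-one match against the corrupted detections
        best_j, best_iou = -1, 0.0
        for j, cb in enumerate(corrupted_boxes):
            if j in used:
                continue
            v = iou(ob, cb)
            if v > best_iou:
                best_j, best_iou = j, v
        if best_iou >= iou_thresh:
            matched += 1
            used.add(best_j)
    return matched / len(orig_boxes)

# Toy example: one of two original detections survives the corruption.
score = relative_robustness(
    [[0, 0, 10, 10], [20, 20, 30, 30]],
    [[1, 1, 10, 10], [100, 100, 110, 110]],
)
print(score)  # → 0.5
```

Because the score needs only the two sets of model outputs, it can be computed online as corruptions are applied, which is what makes the ground-truth-free, per-modality analysis described in the abstract possible.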