Paper WeAT5.6
Nguyen, Thai (Monash University), Luu, Quang-Hung (Monash University), Vu, Hai L. (Monash University)
MetaCamFuse: A Framework for Evaluating Autonomous Driving Perceptions
Scheduled for presentation during the Invited Session "Self-Assessment of Perception Systems" (WeAT5), Wednesday, September 25, 2024,
12:10-12:30, Salon 13
2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada
Keywords: Sensing, Vision, and Perception; Driver Assistance Systems
Abstract
In the pursuit of developing advanced autonomous vehicles (AVs), the perception system plays a crucial role in enabling the vehicle to understand and interpret its surroundings. The debate between camera-only and multi-modal (multi-sensor) fusion systems for perception in AVs remains contentious. Moreover, current research in AVs primarily emphasizes improving accuracy and does not sufficiently address the robustness of these systems under less controlled, more challenging real-world conditions such as varying weather and lighting. This study examines the robustness of camera-only perception and multi-modal fusion systems by taking advantage of an established testing methodology, Metamorphic Testing (MT). MT works without the need for ground-truth datasets and hence applies to a wide range of driving scenarios. To this end, we propose MetaCamFuse, a metamorphic testing framework that leverages a new metric called the "metamorphic robustness ratio" (MRR) to measure the robustness of a system against changes in its inputs. The MRR is then applied to evaluate and compare the robustness of two state-of-the-art camera-only perception systems and two multi-modal fusion (MMF) systems. Our findings indicate that the robustness of all four systems declines as the level of change in brightness, darkness, or the speedy effect increases. While MMF systems are anticipated to be more robust against variations in camera inputs (benefiting from stable LiDAR point-cloud data), this advantage is primarily observed in detecting car objects under the speedy effect. Furthermore, we show that a more advanced deep learning system with superior accuracy does not necessarily exhibit a correspondingly higher level of robustness. The results also indicate that MetaCamFuse can help construct an effective robustness measure that uncovers inconsistencies and failures of safety-critical systems not detectable by conventional testing methods, providing more useful information for industrial manufacturers when selecting the right technology.
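The abstract does not reproduce the MRR definition, so the following Python sketch shows one plausible instantiation of the underlying metamorphic-testing idea rather than MetaCamFuse's actual implementation: apply a transformation such as brightening to each input, re-run the detector, and report the fraction of original detections that survive. All names here (`metamorphic_robustness_ratio`, `brighten`, the `detector` interface returning (label, box) pairs) and the IoU threshold are hypothetical assumptions for illustration only.

```python
# Illustrative sketch only: assumes MRR is the fraction of source-image
# detections that are still produced (same class, overlapping box) on the
# transformed follow-up image. Names and thresholds are hypothetical.
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def brighten(image, factor=1.5):
    """Metamorphic transformation: increase brightness of a uint8 image."""
    return np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)

def metamorphic_robustness_ratio(detector, images, transform, iou_thresh=0.5):
    """Fraction of original detections preserved after the transformation.

    `detector(image)` is assumed to return a list of (label, box) pairs.
    Under the metamorphic relation, detections should be unchanged, so a
    lost detection counts as a violation; no ground truth is required.
    """
    preserved, total = 0, 0
    for image in images:
        source_dets = detector(image)
        followup_dets = detector(transform(image))
        for label, box in source_dets:
            total += 1
            if any(lbl == label and iou(box, b) >= iou_thresh
                   for lbl, b in followup_dets):
                preserved += 1
    return preserved / total if total else 1.0
```

Because the relation compares the system's own outputs before and after the transformation, such a ratio can be computed on unlabeled driving footage, which is the property of MT that the abstract highlights.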