ITSC 2025 Paper Abstract


Paper FR-EA-T40.6

Jiang, Xiyan (Tongji University), Zhao, Xiaocong (Tongji University), Liu, Yiru (Tongji University), Hang, Peng (Tongji University), Sun, Jian (Tongji University)

Every Scene All at Once: Exhaustive Multi-Agent Interaction Generation with Controlled Diffusion Model

Scheduled for presentation during the Regular Session "S40b-Cooperative and Connected Autonomous Systems" (FR-EA-T40), Friday, November 21, 2025, 14:50−15:30, Coolangatta 4

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords: Cooperative Driving Systems and Vehicle Coordination in Multi-vehicle Scenarios, Methods for Verifying Safety and Security of Autonomous Traffic Systems

Abstract

Autonomous vehicles (AVs) must safely navigate a vast range of complex multi-agent interactions for widespread deployment. However, their decision-making systems are typically trained on observational data that capture only a single realized outcome for any given event, leaving them blind to the full spectrum of plausible, yet unobserved, alternatives. This "mode collapse" severely limits their ability to anticipate and handle rare but safety-critical scenarios. Here we introduce a generative framework that overcomes this limitation by systematically enumerating and synthesizing the complete set of all dynamically feasible multi-agent interaction modes for a given scene. Our method first decomposes complex interactions into negotiations at conflict points and enumerates all possible passage-order permutations. We then use a graph-based spatiotemporal optimization to compute kinematically feasible anchor points for each permutation. These anchors subsequently guide a controllable diffusion model to generate realistic, full-scene trajectories representing every possible interaction mode. Using naturalistic driving data, we demonstrate that our framework generates comprehensive sets of interaction modes, including low-probability counterfactuals that are physically and socially plausible. For instance, in a complex three-vehicle interaction, our model generated all three dynamically possible outcomes while correctly identifying a fourth, seemingly plausible mode as infeasible due to subtle environmental constraints. By creating datasets that encompass all possible futures, our approach provides a critical tool for robust testing and development of AV decision-making systems, enhancing their safety in real-world environments.
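The abstract's first stage, enumerating all passage-order permutations at conflict points, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: the function name, the toy conflict-point map, and the vehicle labels are all assumptions, and the subsequent feasibility filtering (the graph-based spatiotemporal optimization) is not shown.

```python
from itertools import permutations, product

def enumerate_passage_orders(conflict_points):
    """Enumerate candidate interaction modes.

    conflict_points maps each conflict point to the tuple of agents
    involved there. Every ordering in which those agents could pass is
    a local permutation; the Cartesian product over all conflict points
    yields the full set of candidate interaction modes. (In the paper's
    pipeline, each candidate would then be checked for kinematic
    feasibility before guiding the diffusion model.)
    """
    orders_per_cp = [
        [(cp, order) for order in permutations(involved)]
        for cp, involved in conflict_points.items()
    ]
    return [dict(mode) for mode in product(*orders_per_cp)]

# Hypothetical example: three vehicles A, B, C with two pairwise
# conflict points, giving 2 x 2 = 4 candidate modes -- mirroring the
# abstract's three-vehicle case where one of four candidates proves
# infeasible.
conflict_points = {"cp1": ("A", "B"), "cp2": ("B", "C")}
modes = enumerate_passage_orders(conflict_points)
```

With two agents at each of two conflict points, this yields four candidate modes; the combinatorial count grows factorially with the number of agents per conflict point, which is why the feasibility pruning step matters.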

All Content © PaperCept, Inc.

