Paper FrBT10.3
Beringhoff, Felix (Volkswagen AG), Greenyer, Joel (FHDW Hannover), Roesener, Christian (Volkswagen AG), Tichy, Matthias (Ulm University)
Realizing Scenario-Based Verification Tests of Automated Vehicles with an AI-Controlled Surrounding Vehicle in a Practice-Relevant Context
Scheduled for presentation during the Regular Session "Generating driving scenarios II" (FrBT10), Friday, September 27, 2024,
14:10−14:30, Salon 18
2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada
Keywords: Simulation and Modeling; Other Theories, Applications, and Technologies
Abstract
Scenario-based testing is seen as a key to the verification and validation of automated vehicles (AVs). In a test scenario, the AV is tested under pre-defined traffic conditions. Realizing those traffic conditions, however, is challenging because the AV behaves autonomously: it decides on its own whether it enters the test scenario conditions at all, e.g., by choosing which lane to drive in or at which velocity to drive. To influence the AV's decision-making such that a required test scenario condition is realized, we implement a novel AI method for controlling a surrounding vehicle (SV) of the AV. The AI-controlled SV (AISV) consists of a reinforcement learning (RL) agent that is trained, e.g., to nudge the AV into changing lanes. In contrast to current common practice, this approach does not require manually tailored triggers and actions for controlling the SV. In this paper, we report on a working design of the RL framework and on experiments with three different training scopes for the RL agent. We distinguish specialized agents, which are trained to reach a single scenario condition, from two kinds of generalized agents, which are intended to reach a set of scenario conditions. The results show that specialized agents perform best, with a success rate of up to 100%. The generalized agents, however, perform better at realizing scenario conditions that are unknown to the agent from training. We also report on an implementation of the approach on a hardware-in-the-loop (HIL) simulation test bench used in industrial practice and discuss a first try-out.
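To illustrate the core idea described in the abstract, the following is a minimal, hypothetical sketch: an RL agent controls a surrounding vehicle (SV) and is rewarded when a simple rule-based AV model is nudged into a target scenario condition (here: a lane change). The environment dynamics, the tabular Q-learning algorithm, and all names and parameters are illustrative assumptions for this sketch and are not taken from the paper's actual framework.

```python
# Hypothetical sketch: an RL agent controls the SV so that a rule-based
# "AV" model is nudged into a target scenario condition (a lane change).
# Dynamics, reward shaping, and the learning algorithm are assumptions.
import random

class TwoLaneEnv:
    """Toy two-lane world. The SV drives ahead of the AV in the same lane."""
    def __init__(self):
        self.reset()

    def reset(self):
        self.gap = 60.0      # distance (m) between the SV and the AV behind it
        self.av_lane = 0     # AV starts in lane 0
        self.steps = 0
        return self._obs()

    def _obs(self):
        # Discretize the gap into coarse buckets so tabular Q-learning works.
        return min(int(self.gap // 10), 9)

    def step(self, action):
        # SV actions: 0 = brake, 1 = keep speed, 2 = accelerate.
        dv = {0: -3.0, 1: 0.0, 2: +3.0}[action]
        self.gap = max(0.0, self.gap + dv)   # relative motion per step
        # Simple AV model: it changes lane when the gap ahead gets too small.
        if self.av_lane == 0 and self.gap < 20.0:
            self.av_lane = 1
        self.steps += 1
        done = self.av_lane == 1 or self.steps >= 30
        # Scenario condition reached: the AV has changed to lane 1.
        reward = 1.0 if self.av_lane == 1 else 0.0
        return self._obs(), reward, done

def train(episodes=2000, alpha=0.3, gamma=0.95, eps=0.2):
    """Tabular Q-learning stand-in for the (unspecified) RL algorithm."""
    env = TwoLaneEnv()
    q = [[0.0] * 3 for _ in range(10)]       # 10 gap buckets x 3 actions
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            a = (random.randrange(3) if random.random() < eps
                 else max(range(3), key=lambda i: q[s][i]))
            s2, r, done = env.step(a)
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

if __name__ == "__main__":
    q_table = train()
    print(q_table)
```

In the abstract's terminology, this would correspond to a specialized agent trained for a single scenario condition; a generalized agent would instead be trained and rewarded over a set of such conditions.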