ITSC 2024 Paper Abstract

Paper ThBT10.5

Ahmic, Kenan (German Aerospace Center (DLR)), Ultsch, Johannes (German Aerospace Center (DLR)), Brembeck, Jonathan (German Aerospace Center (DLR)), Burschka, Darius (Technical University Munich)

Multi-Agent Reinforcement Learning for Cooperative Vehicle Motion Control

Scheduled for presentation during the Regular Session "Multi-autonomous Vehicle Studies, Models, Techniques and Simulations I" (ThBT10), Thursday, September 26, 2024, 15:50−16:10, Salon 18

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada

This information is tentative and subject to change. Compiled on October 7, 2024

Keywords Multi-autonomous Vehicle Studies, Models, Techniques and Simulations, Cooperative Techniques and Systems, Automated Vehicle Operation, Motion Planning, Navigation

Abstract

The longitudinal and lateral low-level motion control of multiple vehicles within a platoon is a challenging task, since several different control objectives need to be solved: (i) each vehicle in the platoon needs to follow the reference path, (ii) the leading vehicle needs to drive at a desired reference velocity, and (iii) the following vehicles need to maintain a safe spacing distance to their respective preceding vehicle. Typically, a distinct controller is developed for each task individually, which increases both the engineering effort and the susceptibility to errors. We address this issue and present a cooperative low-level vehicle motion controller based on Multi-Agent Reinforcement Learning (MARL) that solves all of the above-mentioned control objectives for both the leading vehicle and the following vehicles. To this end, we apply parameter sharing within MARL to update a single control policy in a centralized fashion using the experiences of all vehicles in the environment. Additionally, we utilize the concept of agent indication during training, enabling the policy to specialize in the control objectives of the vehicle it is currently controlling. This yields a unifying control approach and makes the development of additional controllers unnecessary. The simulative assessment demonstrates the effectiveness of the learned policy and shows that it successfully solves all of the above-mentioned control objectives for both vehicle roles, even on unseen paths.
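The combination of parameter sharing and agent indication described in the abstract can be illustrated with a minimal sketch: a single set of policy parameters is used for every vehicle, and a one-hot role indicator appended to each observation lets the shared parameters produce role-specific behavior. All names, dimensions, and the observation layout below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical roles in the platoon; the paper distinguishes the
# leading vehicle from the following vehicles.
ROLES = {"leader": 0, "follower": 1}

rng = np.random.default_rng(0)
OBS_DIM, ACT_DIM = 4, 2  # illustrative observation/action sizes

# Parameter sharing: ONE weight matrix serves all vehicles.
# Its input is the observation concatenated with the one-hot role.
W = rng.standard_normal((OBS_DIM + len(ROLES), ACT_DIM))

def shared_policy(obs, role):
    """Compute an action from the shared parameters W.

    Agent indication: the one-hot role vector is concatenated to the
    observation, so the same shared parameters can specialize on the
    control objectives of the vehicle currently being controlled.
    """
    indicator = np.zeros(len(ROLES))
    indicator[ROLES[role]] = 1.0
    return np.tanh(np.concatenate([obs, indicator]) @ W)

# Illustrative observation, e.g. lateral error, heading error,
# velocity error, spacing error (layout is an assumption).
obs = np.array([0.1, -0.2, 0.05, 0.3])
a_lead = shared_policy(obs, "leader")
a_follow = shared_policy(obs, "follower")
```

In a centralized training setup, the gradient update for `W` would aggregate experiences from all vehicles, so a single policy is learned rather than one controller per task.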
