ITSC 2024 Paper Abstract


Paper WeBT3.1

Han, Xu (Hong Kong University of Science and Technology (Guangzhou)), Yang, Qiannan (Hong Kong University of Science and Technology (Guangzhou)), Chen, Xianda (HKUST(GZ)), Chu, Xiaowen (The Hong Kong University of Science and Technology (Guangzhou)), Zhu, Meixin (HKUST)

Generating and Evolving Reward Functions for Highway Driving with Large Language Models

Scheduled for presentation during the Invited Session "AI-Enhanced Safety-Certifiable Autonomous Vehicles" (WeBT3), Wednesday, September 25, 2024, 14:30−14:50, Salon 6

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada


Keywords: Advanced Vehicle Safety Systems, Automated Vehicle Operation, Motion Planning, Navigation, Theory and Models for Optimization and Control

Abstract

Reinforcement Learning (RL) plays a crucial role in advancing autonomous driving technologies, learning optimal policies by maximizing reward functions. However, crafting these reward functions remains a complex, manual process in practice. To reduce this complexity, we introduce a novel framework that integrates Large Language Models (LLMs) with RL to improve reward function design for autonomous driving. The framework exploits the coding capabilities of LLMs, proven in other domains, to generate and evolve reward functions for highway scenarios. It first instructs the LLM to write initial reward function code from descriptions of the driving environment and the task. This code is then refined through iterative cycles of RL training and LLM reflection, drawing on the model's ability to review and improve its own output. We also develop a dedicated prompt template that improves the LLM's understanding of complex driving simulations and ensures the generation of effective, error-free code. Experiments in a highway driving simulator across three traffic configurations show that our method surpasses expert-handcrafted reward functions, achieving a 22% higher average success rate. This indicates not only safer driving but also significant gains in development productivity.
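To make the idea concrete, below is a minimal Python sketch of the kind of reward function an LLM might generate for a highway driving task. This is not the paper's actual generated code: the state fields (speed, collided, on_road, lane_index), the weights, and the target speed are all illustrative assumptions.

# Hypothetical sketch: an LLM-generated dense reward for highway driving.
# All field names, weights, and thresholds are illustrative assumptions,
# not the functions actually produced in the paper's experiments.
from dataclasses import dataclass

@dataclass
class EgoState:
    speed: float        # ego speed in m/s
    collided: bool      # True if the ego vehicle has crashed
    on_road: bool       # True if the ego vehicle is still on the roadway
    lane_index: int     # 0 denotes the rightmost lane

def reward(s: EgoState, target_speed: float = 30.0) -> float:
    """Balance progress, safety, and lane discipline in a single scalar."""
    if s.collided or not s.on_road:
        return -1.0  # terminal failure: crashing or leaving the road
    # Progress: 1.0 at the target speed, decaying linearly with deviation.
    speed_term = max(0.0, 1.0 - abs(s.speed - target_speed) / target_speed)
    # Lane discipline: small bonus for keeping right when possible.
    lane_term = 1.0 if s.lane_index == 0 else 0.0
    return 0.8 * speed_term + 0.2 * lane_term

In the evolution step described in the abstract, the LLM would receive feedback from RL training under a candidate function of this form (e.g., success and crash statistics) and propose a revised version, iterating until performance stops improving.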
