ITSC 2024 Paper Abstract


Paper WeBT1.1

He, Shanglu (Nanjing University of Science and Technology), Luo, Kaijie (Nanjing University of Science and Technology), Ye, Mao (Nanjing University of Science and Technology), Peng, Fuming (Nanjing University of Science and Technology), Dong, Zhaozhi (Nanjing Golden dragon Bus Co., Ltd), Liu, Lijun (Nanjing Golden dragon Bus Co., Ltd), Liang, Yu (Nanjing University of Science and Technology)

A Spatial-Temporal Graph Neural Network-Based Human-Like Lane Changing Decision Model for the Autonomous Vehicle

Scheduled for presentation during the Invited Session "Learning-empowered Intelligent Transportation Systems: Foundation Vehicles and Coordination Technique II" (WeBT1), Wednesday, September 25, 2024, 14:30−14:50, Salon 1

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada

This information is tentative and subject to change. Compiled on October 8, 2024

Keywords: Automated Vehicle Operation, Motion Planning, Navigation, Traffic Theory for ITS, Data Mining and Data Analysis

Abstract

Lane-changing decision (LCD) making is an essential procedure in the operation of an autonomous vehicle (AV). Modeling LCD is more complex and challenging than modeling car-following behavior, because it must account for the interactions between the target lane-changing vehicle and its surrounding traffic environment. A promising and effective solution is for the AV to make LCDs by mimicking the lane-changing behavior of human drivers. This study therefore proposes a spatial-temporal graph neural network (STGNN)-based LCD model for AVs that learns LCD behavior from naturalistic driving data. Specifically, a graph structure describing the relationship between the target lane-changing vehicle and nearby vehicles was constructed, and graph-structured data were generated from trajectory data in the naturalistic driving dataset “Ubiquitous Traffic Eyes”. These preprocessed and transformed data were fed into the STGNN model. The STGNN-based LCD model was built on a recurrent architecture that integrates Graph Attention Networks (GAT) with the Gated Recurrent Unit (GRU): GAT captures the spatial features, while GRU handles the temporal features, enhancing spatial-temporal feature extraction for LCD. Using the field data, the proposed STGNN-based LCD model was trained, tested, validated, and compared against baselines. It achieved 86.5% accuracy in validation, surpassing an LSTM-based model, indicating that the proposed approach offers a viable way for an AV to imitate human drivers’ lane-changing decisions.
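The GAT-plus-GRU pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustrative sketch only: the toy graph, feature dimensions, random parameters, and two-class output (keep lane / change lane) are assumptions for demonstration, not the paper's actual architecture or data. At each timestep, a single-head graph attention layer aggregates the features of surrounding vehicles into an embedding for the target vehicle, and a GRU cell carries that embedding forward in time.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gat_layer(X, A, W, a):
    # Single-head graph attention: X is (N, F) node features, A is the
    # (N, N) adjacency matrix (with self-loops), W a (F, H) projection,
    # and a a (2H,) attention vector.
    Hp = X @ W                                    # projected features, (N, H)
    N = X.shape[0]
    logits = np.empty((N, N))
    for i in range(N):
        for j in range(N):
            z = np.concatenate([Hp[i], Hp[j]]) @ a
            logits[i, j] = z if z > 0 else 0.2 * z  # LeakyReLU
    logits = np.where(A > 0, logits, -1e9)        # attend only to neighbors
    alpha = softmax(logits, axis=1)               # attention coefficients
    return np.tanh(alpha @ Hp)                    # aggregated embeddings, (N, H)

def gru_cell(x, h, Wz, Uz, Wr, Ur, Wc, Uc):
    # Standard GRU update for one timestep.
    z = 1.0 / (1.0 + np.exp(-(x @ Wz + h @ Uz)))  # update gate
    r = 1.0 / (1.0 + np.exp(-(x @ Wr + h @ Ur)))  # reset gate
    c = np.tanh(x @ Wc + (r * h) @ Uc)            # candidate state
    return (1.0 - z) * h + z * c

# Toy scenario: target vehicle (node 0) plus 4 surrounding vehicles,
# observed over T = 5 timesteps; 4 features per node (e.g. position, speed).
N, F, H, T = 5, 4, 8, 5
A = np.ones((N, N))                       # fully connected toy graph
W = rng.normal(0, 0.1, (F, H))
a = rng.normal(0, 0.1, 2 * H)
gru_params = [rng.normal(0, 0.1, (H, H)) for _ in range(6)]
W_out = rng.normal(0, 0.1, (H, 2))        # classes: keep lane / change lane

h = np.zeros(H)
for t in range(T):
    X_t = rng.normal(0, 1, (N, F))        # graph snapshot at time t
    emb = gat_layer(X_t, A, W, a)         # spatial features via attention
    h = gru_cell(emb[0], h, *gru_params)  # temporal update for target vehicle
probs = softmax(h @ W_out)                # decision probabilities; sum to 1
```

In the paper's setting the graph snapshots would come from the Ubiquitous Traffic Eyes trajectories and the parameters would be learned by training, rather than drawn at random as here.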


All Content © PaperCept, Inc.


This site is protected by copyright and trademark laws under US and International law.
All rights reserved. © 2002-2024 PaperCept, Inc.
Page generated 2024-10-08  13:59:49 PST  Terms of use