ITSC 2024 Paper Abstract

Paper WeBT14.4

Liu, Ke (University of California, Berkeley), Hu, Fan (University of California, Berkeley), Hui, Lin (Northwestern University), Cheng, Xi (University of Illinois at Chicago), Chen, Jianan (University of British Columbia), Song, Jilin (University of Toronto), Feng, Siyuan (The Hong Kong Polytechnic University), Su, Gaofeng (University of California, Berkeley), Zhu, Chen (Tsinghua University)

Deep Reinforcement Learning for Real-Time Ground Delay Program Revision and Corresponding Flight Delay Assignments

Scheduled for presentation during the Poster Session "Air Traffic Management" (WeBT14), Wednesday, September 25, 2024, 14:30−16:30, Foyer

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada


Keywords: Air Traffic Management, Off-line and Online Data Processing Techniques

Abstract

This paper explores the optimization of Ground Delay Programs (GDP), a prevalent Traffic Management Initiative used in Air Traffic Management (ATM) to reconcile capacity and demand discrepancies at airports. Employing Reinforcement Learning (RL) to manage the inherent uncertainties in the national airspace system, such as weather variability, fluctuating flight demand, and airport arrival rates, we developed two RL models: Behavioral Cloning (BC) and Conservative Q-Learning (CQL). These models are designed to enhance GDP efficiency through a reward function that integrates ground delay, airborne delay, and terminal-area congestion. We constructed a simulated single-airport environment, SAGDP_ENV, which incorporates real operational data along with predicted uncertainties to enable realistic decision-making scenarios. Using full-year 2019 data from Newark Liberty International Airport (EWR), our models aimed to preemptively set airport program rates. Despite thorough modeling and simulation, initial results indicated that the models struggled to learn effectively, potentially owing to oversimplified environmental assumptions. This paper discusses the challenges encountered, evaluates the models' performance against actual operational data, and outlines future directions for refining RL applications in ATM.
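
As a concrete illustration of the reward structure described in the abstract, the Python sketch below shows one way a reward combining ground delay, airborne delay, and terminal-area congestion could be composed. The function name, the weights, and the use of queue length as a congestion proxy are illustrative assumptions, not the paper's actual formulation.

    def gdp_reward(ground_delay_min: float,
                   airborne_delay_min: float,
                   terminal_queue_len: int,
                   w_ground: float = 1.0,
                   w_air: float = 3.0,
                   w_congestion: float = 0.5) -> float:
        # Negative weighted sum: the agent maximizes reward by trading
        # cheap ground delay against costlier airborne delay while
        # keeping the terminal area uncongested. All weights and the
        # queue-length congestion proxy are illustrative assumptions.
        return -(w_ground * ground_delay_min
                 + w_air * airborne_delay_min
                 + w_congestion * terminal_queue_len)

Weighting airborne delay more heavily than ground delay reflects the usual GDP rationale: holding an aircraft on the ground is cheaper and safer than airborne holding near a congested terminal.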
