ITSC 2025 Paper Abstract


Paper TH-LM-T23.5

Liu, Zhuo (Southwest Jiaotong University), Ji, Ang (Southwest Jiaotong University), Su, Lingyun (Southwest Jiaotong University), Sun, Zhanbo (Southwest Jiaotong University), Yang, Linchuan (Southwest Jiaotong University), Kong, Mingming (Xihua University), Wang, Xi (Southwest Jiaotong University)

A Dynamic Memory Graph of Multi-Level Feature Representation Approach in Traffic Prediction

Scheduled for presentation during the Invited Session "S23a-Trustworthy AI for Traffic Sensing and Control" (TH-LM-T23), Thursday, November 20, 2025, 11:50−12:10, Coolangatta 2

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia


Keywords: AI, Machine Learning for Real-time Traffic Flow Prediction and Management, Model-based Validation of Traffic Flow Prediction Algorithms

Abstract

As urban transportation networks grow increasingly complex, accurate traffic flow prediction provides a solid foundation for intelligent traffic management. This paper introduces the Hybrid Graph Memory Network (HGMN), a novel approach designed to model and forecast complex spatio-temporal dependencies in traffic dynamics. We employ one-dimensional convolutional layers to extract hierarchical features from the input, with pooling layers producing structured representations at multiple extraction levels. We then introduce a graph memory network that incorporates the decomposed features into a dynamic memory graph, integrating adjacency matrices with time-varying features to capture the spatio-temporal dependencies inherent in the data. Finally, an upsampling mechanism fuses features from the different hierarchical levels, ensuring that fine-grained information is effectively integrated and supporting the memory retention and prediction of both short- and long-term traffic dynamics. Experiments on four distinct datasets validate the performance of HGMN, and a sensitivity analysis examines the impact of its individual components. The results show that, compared to the benchmark, HGMN reduces model size by 46.1%, lowering memory usage, while improving accuracy by 5.7%.
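For readers who want a concrete picture of the pipeline, the following is a minimal PyTorch sketch of the stages the abstract describes: multi-level feature extraction with one-dimensional convolutions and pooling, a memory graph step that combines an adjacency matrix with time-varying node features, and upsampling-based fusion of the hierarchical levels. All layer choices, tensor shapes, and the memory/graph update rule here are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of an HGMN-style pipeline; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class HGMNSketch(nn.Module):
    def __init__(self, num_nodes, in_steps, horizon, hidden=32, levels=3):
        super().__init__()
        self.levels = levels
        # 1-D convolutions extract temporal features at each hierarchical level.
        self.convs = nn.ModuleList(
            [nn.Conv1d(1 if l == 0 else hidden, hidden, kernel_size=3, padding=1)
             for l in range(levels)]
        )
        # Learnable node memory queried at every level (assumed mechanism).
        self.memory = nn.Parameter(torch.randn(num_nodes, hidden))
        self.graph_proj = nn.Linear(hidden, hidden)
        self.head = nn.Linear(hidden * in_steps, horizon)

    def forward(self, x, adj):
        # x:   (batch, num_nodes, in_steps) traffic measurements
        # adj: (num_nodes, num_nodes) adjacency matrix
        b, n, t = x.shape
        feats, h = [], x.reshape(b * n, 1, t)
        for l, conv in enumerate(self.convs):
            h = F.relu(conv(h))                      # temporal features at level l
            feats.append(h)
            if l < self.levels - 1:
                h = F.max_pool1d(h, kernel_size=2)   # coarser level via pooling
        fused = 0
        for f in feats:
            # Upsample each level back to the input length so levels can be fused.
            f = F.interpolate(f, size=t, mode="linear", align_corners=False)
            f = f.reshape(b, n, -1, t).permute(0, 3, 1, 2)  # (batch, t, nodes, hidden)
            # Memory-graph step: propagate features over the adjacency matrix and
            # add the learnable node memory (illustrative update rule).
            f = torch.einsum("ij,btjh->btih", adj, self.graph_proj(f)) + self.memory
            fused = fused + f                        # fuse levels by summation
        fused = fused.permute(0, 2, 3, 1).reshape(b, n, -1)
        return self.head(fused)                      # (batch, nodes, horizon)


# Example: 64 sensors, 12 historical steps, 12-step-ahead prediction.
model = HGMNSketch(num_nodes=64, in_steps=12, horizon=12)
out = model(torch.randn(8, 64, 12), torch.eye(64))
print(out.shape)  # torch.Size([8, 64, 12])
```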
