ITSC 2025 Paper Abstract


Paper TH-LM-T20.2

Xie, Xiaolong (Tongji University), Zhu, Hong (Tongji University), Tang, Keshuang (Tongji University), Sun, Fengmei (Tongji University)

LLM Distilled Spatio-Temporal Prompt Network for Traffic Forecasting

Scheduled for presentation during the Invited Session "S20a-Foundation Model-Enabled Scene Understanding, Reasoning, and Decision-Making for Autonomous Driving and ITS" (TH-LM-T20), Thursday, November 20, 2025, 10:50−11:10, Surfers Paradise 2

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords: AI, Machine Learning for Real-time Traffic Flow Prediction and Management; AI, Machine Learning Techniques for Traffic Demand Forecasting

Abstract

Accurate traffic flow prediction is vital for intelligent transportation systems to reduce congestion and optimize resources. However, heterogeneous traffic patterns across cities and limited labeled data in target domains cause distribution shifts, posing a practical challenge for model generalization and making retraining on each new distribution prohibitively costly. To address this challenge, we propose the Spatio-Temporal Prompt Network (STPN), a lightweight student network distilled from a Large Language Model (LLM), which substantially reduces the fine-tuning cost for downstream tasks. The proposed network leverages LLM-derived knowledge priors to facilitate prompt learning and improve cross-domain adaptability. In addition, during the distillation process, a synchronous fine-tuning strategy is introduced to jointly optimize the partially frozen teacher model and the fully trainable student model, further improving model performance and training efficiency. Extensive experiments on datasets from multiple cities demonstrate STPN's effectiveness: on PEMS07(M), MAE improves by 1.49% and RMSE by 1.31%; on ChengDu-DIDI, MAE improves by 0.85% and RMSE by 0.84%, indicating superior cross-domain generalization under distribution shifts.
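The abstract's synchronous fine-tuning strategy (jointly optimizing a partially frozen teacher and a fully trainable student during distillation) can be illustrated with a minimal PyTorch-style sketch. All module and parameter names below (ToyTeacher, StudentSTPN, the "prompt" parameter, the loss weighting) are hypothetical assumptions for illustration, not the authors' released implementation.

```python
# Sketch of synchronous fine-tuning for LLM-to-student distillation:
# the teacher backbone is frozen, only its prompt parameters train,
# while the lightweight student is fully trainable; both are updated
# in the same step with a task loss plus a distillation loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyTeacher(nn.Module):
    """Stand-in for an LLM-based teacher: frozen backbone + trainable prompt."""
    def __init__(self, in_dim, hidden, out_dim, prompt_len=4):
        super().__init__()
        self.prompt = nn.Parameter(torch.zeros(prompt_len))  # stays trainable
        self.backbone = nn.Sequential(nn.Linear(in_dim + prompt_len, hidden),
                                      nn.ReLU(), nn.Linear(hidden, out_dim))
    def forward(self, x):
        p = self.prompt.expand(x.size(0), -1)                # broadcast prompt per sample
        return self.backbone(torch.cat([x, p], dim=-1))

class StudentSTPN(nn.Module):
    """Lightweight student: small MLP over flattened spatio-temporal input."""
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, out_dim))
    def forward(self, x):
        return self.net(x)

def build_optimizer(teacher, student, lr=1e-3):
    # Freeze the teacher except parameters whose name contains "prompt".
    for name, p in teacher.named_parameters():
        p.requires_grad = "prompt" in name
    trainable = [p for p in teacher.parameters() if p.requires_grad]
    trainable += list(student.parameters())                  # student fully trainable
    return torch.optim.Adam(trainable, lr=lr)

def train_step(teacher, student, optimizer, x, y, alpha=0.5):
    teacher_pred = teacher(x)                                # grads reach prompt only
    student_pred = student(x)
    task_loss = F.l1_loss(student_pred, y)                   # MAE on ground truth
    distill_loss = F.mse_loss(student_pred, teacher_pred.detach())  # match teacher
    teacher_task_loss = F.l1_loss(teacher_pred, y)           # teacher prompts keep adapting
    loss = task_loss + alpha * distill_loss + teacher_task_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under these assumptions, one optimizer update touches both the teacher's prompt parameters and all student parameters, which is the "synchronous" aspect; the detach on the teacher's prediction keeps the distillation term from pulling the teacher toward the student.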
