ITSC 2025 Paper Abstract

Paper TH-EA-T22.4

Liu, Yonghui (Korea Advanced Institute of Science and Technology), Kim, Inhi (Korea Advanced Institute of Science and Technology)

Cross-Taxonomy Label Alignment in Transportation Networks with Prompt-Driven Large Language Models

Scheduled for presentation during the Invited Session "S22b-Emerging Trends in AV Research" (TH-EA-T22), Thursday, November 20, 2025, 14:30-14:50, Coolangatta 1

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords Multimodal Transportation Networks for Efficient Urban Mobility, Transportation Optimization Techniques and Multi-modal Urban Mobility, AI, Machine Learning and Predictive Analytics for Traffic Incident Detection and Management

Abstract

Integrating transportation datasets from diverse sources is critical for comprehensive analysis and intelligent infrastructure planning. A major obstacle to effective integration is taxonomy heterogeneity: datasets differ in their points-of-interest (POI) categories, road type classifications, and land use taxonomies. Reconciling these inconsistencies requires cross-taxonomy label alignment, a challenge that remains underexplored because of the reliance on costly human annotations and the limited effectiveness of rule-based matching. This paper presents a novel framework that combines prompt-driven pseudo-labeling with multi-view representation learning for cross-taxonomy label alignment. Large language model (LLM) prompts generate initial semantic pseudo-labels to guide pretraining, while multi-view input representations are constructed by combining DeepWalk-based structural embeddings with encoded source-taxonomy labels. The framework is first trained in a semi-supervised manner on the pseudo-labels, then fine-tuned on a small set of human-verified labels to refine alignment quality. Extensive experiments show that the proposed method substantially outperforms rule-based and prompt-only baselines, maintaining robust performance even when only 1% of labeled samples are available. These results demonstrate that leveraging LLM-guided semantic supervision alongside structural representations enables robust and scalable cross-taxonomy adaptation, even under severely limited supervision.
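The two ingredients the abstract names, LLM-prompted pseudo-labels and multi-view inputs that concatenate structural embeddings with encoded source-taxonomy labels, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the taxonomy names, the prompt wording, the stand-in lookup that replaces a real LLM call, and the 4-dimensional embedding are all hypothetical, and in the paper the structural embedding would come from DeepWalk over the transportation network.

```python
import numpy as np

# Hypothetical source and target POI taxonomies (illustrative, not from the paper).
SOURCE_POI = ["cafe", "bus_stop", "clinic"]
TARGET_POI = ["food_beverage", "transit", "healthcare"]

def build_prompt(label, target):
    """Prompt asking an LLM to map a source label onto the target taxonomy."""
    return (f"Map the POI category '{label}' to the closest category in "
            f"{target}. Answer with one category name only.")

def pseudo_label(label, llm=None):
    """Return a semantic pseudo-label; the lookup stands in for a real LLM call."""
    stand_in = {"cafe": "food_beverage", "bus_stop": "transit",
                "clinic": "healthcare"}
    return llm(build_prompt(label, TARGET_POI)) if llm else stand_in[label]

def multi_view(struct_emb, src_label):
    """Concatenate a structural embedding with a one-hot source-taxonomy code."""
    onehot = np.zeros(len(SOURCE_POI))
    onehot[SOURCE_POI.index(src_label)] = 1.0
    return np.concatenate([struct_emb, onehot])

# A 4-dim stand-in for a DeepWalk embedding of one network node.
emb = np.array([0.1, -0.3, 0.5, 0.2])
x = multi_view(emb, "cafe")   # 7-dim multi-view input vector
y = pseudo_label("cafe")      # pseudo-label in the target taxonomy
```

In the full framework, pairs like `(x, y)` would drive semi-supervised pretraining, with a small human-verified subset used afterwards for fine-tuning.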
