ITSC 2025 Paper Abstract


Paper TH-LM-T20.6

Li, Lin (Nanyang Technological University), Cai, Yuxin (Nanyang Technological University), Fang, Jianwu (Xi'an Jiaotong University), Xue, Jianru (Xi'an Jiaotong University), Lv, Chen (Nanyang Technological University)

COVLM-RL: Critical Object-Oriented Reasoning for Autonomous Driving Using VLM-Guided Reinforcement Learning

Scheduled for presentation during the Invited Session "S20a-Foundation Model-Enabled Scene Understanding, Reasoning, and Decision-Making for Autonomous Driving and ITS" (TH-LM-T20), Thursday, November 20, 2025, 12:10–12:30, Surfers Paradise 2

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords: Autonomous Vehicle Safety and Performance Testing, Real-time Motion Planning and Control for Autonomous Vehicles in ITS Networks, Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles

Abstract

End-to-end autonomous driving frameworks face persistent challenges in generalization, training efficiency, and interpretability. While recent methods leverage Vision-Language Models (VLMs) through supervised learning on large-scale datasets to improve reasoning, they often lack robustness in novel scenarios. Conversely, reinforcement learning (RL)-based approaches enhance adaptability but remain data-inefficient and lack transparent decision-making. To address these limitations, we propose COVLM-RL, a novel end-to-end driving framework that integrates Critical Object-oriented (CO) reasoning with VLM-guided RL. Specifically, we design a Chain-of-Thought (CoT) prompting strategy that enables the VLM to reason over critical traffic elements and generate high-level semantic decisions, effectively transforming multi-view visual inputs into structured semantic decision priors. These priors reduce the input dimensionality and inject task-relevant knowledge into the RL loop, accelerating training and improving policy interpretability. However, bridging high-level semantic guidance with continuous low-level control remains non-trivial. To this end, we introduce a consistency loss that encourages alignment between the VLM’s semantic plans and the RL agent’s control outputs, enhancing interpretability and training stability. Experiments conducted in the CARLA simulator demonstrate that COVLM-RL significantly improves the success rate by 30% in trained driving environments and by 50% in previously unseen environments, highlighting its strong generalization capability.
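The abstract does not spell out the prompting format. The following minimal Python sketch illustrates one plausible shape of the Chain-of-Thought prompt and the resulting structured semantic decision prior; the prompt wording, the JSON schema, and the vlm.generate interface are illustrative assumptions, not the authors' implementation.

import json

# Hypothetical Chain-of-Thought prompt for critical-object (CO) reasoning.
# The step structure and output schema are assumptions for illustration.
COT_PROMPT = """You are the high-level reasoning module of an autonomous vehicle.
Step 1: From the multi-view camera images, list the critical traffic elements
        (vehicles, pedestrians, traffic lights, lane markings) near the ego vehicle.
Step 2: For each element, explain whether and how it constrains the ego motion.
Step 3: Output one JSON line: {"lateral": "keep|left|right",
        "longitudinal": "accelerate|cruise|brake"}."""

LATERAL = ["keep", "left", "right"]
LONGITUDINAL = ["accelerate", "cruise", "brake"]

def semantic_decision_prior(vlm, images):
    """Map multi-view images to a compact semantic decision prior for the RL policy."""
    reply = vlm.generate(COT_PROMPT, images)               # assumed VLM text interface
    decision = json.loads(reply.strip().splitlines()[-1])  # Step-3 JSON on the last line
    # Encode the high-level decision as two small integers; this is the
    # low-dimensional prior that replaces raw multi-view visual input.
    return LATERAL.index(decision["lateral"]), LONGITUDINAL.index(decision["longitudinal"])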
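Similarly, the consistency loss is only described at a high level. A minimal sketch of one way such an alignment term could be written, assuming the policy outputs steering and acceleration in [-1, 1] and using hypothetical target values that are not the paper's constants:

import torch

def consistency_loss(steer, accel, lateral, longitudinal):
    """Penalize continuous controls that contradict the VLM's semantic plan.

    steer, accel: policy outputs in [-1, 1]
    lateral: 0=keep, 1=left, 2=right
    longitudinal: 0=accelerate, 1=cruise, 2=brake
    The target values below are illustrative assumptions.
    """
    steer_target = torch.tensor([0.0, -0.5, 0.5])[lateral]
    accel_target = torch.tensor([0.5, 0.0, -0.5])[longitudinal]
    return (steer - steer_target) ** 2 + (accel - accel_target) ** 2

# Added to the RL objective so gradients pull the policy toward the plan:
# total_loss = rl_loss + lambda_consistency * consistency_loss(...)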
