ITSC 2025 Paper Abstract

Paper FR-EA-T44.5

Pandian, Ashish (UC Berkeley), Dara, Ashwin (University of California, Berkeley), Bindel, Adrien (École Polytechnique), Lichtlé, Nathan (UC Berkeley), Singh, Prasen Jit (Intelmatix, Massachusetts Institute of Technology), Alzamzami, Fatimah (Prince Sultan University), Othman, Esam (Prince Sultan University, Riyadh, Saudi Arabia), Almatrudi, Sulaiman (University of California, Berkeley), Lee, Jonathan (University of California, Berkeley), Bayen, Alexandre (University of California, Berkeley)

Verifiable Language Model Explanations for Deep RL-Based Flow Smoothing

Scheduled for presentation during the Regular Session "S44b-Human Factors and Human Machine Interaction in Automated Driving" (FR-EA-T44), Friday, November 21, 2025, 14:50−14:50, Currumbin

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords Trust, Acceptance, and Public Perception of Autonomous Transportation Technologies, Real-time Motion Planning and Control for Autonomous Vehicles in ITS Networks, AI, Machine Learning for Real-time Traffic Flow Prediction and Management

Abstract

We introduce CLEAR (Contextual Language Explanations for Actions from RL), a novel framework for generating step-by-step natural language explanations of reinforcement learning (RL) traffic controller decisions. To mitigate the risk of large language model (LLM) hallucinations in safety-critical applications, CLEAR incorporates a sequence of validation steps: cross-checking explanations against policy outputs for accuracy, simulating environment perturbations for grounding, and verifying logical consistency to ensure safe and faithful explanations. Unlike static supervised fine-tuning approaches that memorize explanations for observed state-action pairs, CLEAR continuously integrates new experiences through online learning, enabling rapid adaptation to novel traffic scenarios. Evaluated on experimental data from the VanderTest, a field deployment of autonomous vehicles on I-24 for traffic smoothing, CLEAR produces high-fidelity, context-aware explanations that significantly improve interpretability without sacrificing control performance. CLEAR outperforms few-shot prompting by 57% and retrieval-based multi-agent workflows by 42% in predicting RL controller decisions. This work addresses a core challenge in deploying RL for mixed-autonomy traffic: the lack of transparency that leads even trained operators to disengage otherwise effective RL controllers. By integrating explanation and validation directly into the control loop, CLEAR helps bridge the gap between policy performance and human trust. Code is available at https://github.com/clear-reasoning/CLEAR.
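To make the validation sequence described in the abstract concrete, the following Python sketch illustrates how a candidate explanation might pass through the three checks: cross-checking against the policy output, grounding via environment perturbation, and logical-consistency verification. All names here (validate_explanation, policy.act, env.perturb, llm.judge, and the explanation fields) are hypothetical placeholders for illustration; the actual implementation is in the linked repository and may differ substantially.

    import copy

    def validate_explanation(policy, env, state, explanation, llm):
        """Illustrative sketch of a CLEAR-style explanation validation pass.

        1. Accuracy: the action the explanation claims must match the policy output.
        2. Grounding: perturbing a state feature the explanation cites should
           change the policy's action (a simplified causal check).
        3. Consistency: an LLM judge screens the explanation for logical coherence.
        """
        # 1. Cross-check the explanation against the actual policy output.
        action = policy.act(state)
        if explanation.claimed_action != action:
            return False, "explanation disagrees with policy output"

        # 2. Ground the explanation by perturbing each cited state feature
        #    and checking whether the policy's decision actually responds.
        for feature in explanation.cited_features:
            perturbed = copy.deepcopy(state)
            perturbed[feature] = env.perturb(state, feature)  # hypothetical helper
            if policy.act(perturbed) == action:
                return False, f"cited feature '{feature}' shows no causal effect"

        # 3. Verify logical consistency with an LLM judge.
        verdict = llm.judge(
            f"Is this explanation internally consistent? {explanation.text}"
        )
        if verdict != "consistent":
            return False, "explanation failed consistency check"

        return True, "explanation validated"

The deliberately simple grounding check above treats any cited feature whose perturbation leaves the action unchanged as non-causal; a deployed system would likely use graded perturbations and tolerance thresholds rather than a single binary test.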
