ITSC 2025 Paper Abstract

Paper FR-LA-T42.1

Zhao, Minglu (Tokyo Institute of Technology), Shimosaka, Masamichi (Tokyo Institute of Technology)

Continuous Inverse Reinforcement Learning with State-Wise Safety Constraints for Stable Driving Behavior Prediction

Scheduled for presentation during the Regular Session "S42c-Safety and Risk Assessment for Autonomous Driving Systems" (FR-LA-T42), Friday, November 21, 2025, 16:00−16:20, Broadbeach 3

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords Autonomous Vehicle Safety and Performance Testing, Real-time Motion Planning and Control for Autonomous Vehicles in ITS Networks, AI, Machine Learning and Predictive Analytics for Traffic Incident Detection and Management

Abstract

Inverse reinforcement learning (IRL) is a promising approach for modeling human driving behaviors by learning underlying reward functions from expert demonstrations. While recent studies have incorporated failed demonstrations to improve learning robustness, most existing methods enforce safety constraints only at the trajectory level, which is insufficient for real-world autonomous driving scenarios that require per-state safety. This paper proposes a novel IRL framework that introduces state-wise safety constraints via a behavior discriminator, which generates a safety label for each state based on environmental context. By integrating the discriminator into the main reward optimization loop, the proposed method avoids additional computational overhead while ensuring safety at every decision point. Experimental results in the CARLA simulator across multiple driving scenarios demonstrate improved performance in both behavior imitation and satisfaction of driving-task requirements. The results confirm that enforcing state-wise safety significantly enhances the stability and reliability of driving behavior prediction in static contextual environments, providing a viable direction for safer autonomous decision-making.
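The abstract does not give implementation details, but the general idea of per-state safety labels weighting a reward update can be sketched as follows. This is a minimal illustration only, not the paper's actual method: the clearance-based discriminator rule, the feature representation, and the MaxEnt-style gradient step are all assumptions introduced here for clarity.

```python
import numpy as np

def behavior_discriminator(states, min_clearance=2.0):
    """Hypothetical state-wise safety labeler.

    Assumes the first state feature is the clearance (in meters) to the
    nearest obstacle; a state is labeled safe (1.0) if the clearance
    exceeds a threshold, unsafe (0.0) otherwise.
    """
    return (states[:, 0] > min_clearance).astype(float)

def constrained_irl_step(theta, expert_feats, policy_feats, safety, lr=0.1):
    """One illustrative MaxEnt-IRL-style reward-weight update.

    The gradient is the expert feature expectation minus the policy
    feature expectation, with each policy state weighted by its safety
    label so that unsafe states do not reinforce the learned reward.
    """
    weighted_policy_mean = (safety[:, None] * policy_feats).mean(axis=0)
    grad = expert_feats.mean(axis=0) - weighted_policy_mean
    return theta + lr * grad

# Toy usage: two policy states, one safe (clearance 3.0) and one unsafe (1.0).
states = np.array([[3.0], [1.0]])
labels = behavior_discriminator(states)            # [1.0, 0.0]
theta = np.zeros(1)
expert_feats = np.array([[1.0], [1.0]])
policy_feats = np.array([[2.0], [4.0]])
theta_new = constrained_irl_step(theta, expert_feats, policy_feats, labels)
```

In this toy example, the unsafe state's features are masked out of the policy expectation, so only safe behavior contributes to the reward update, which is the intuition behind enforcing safety at every decision point rather than over whole trajectories.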

All Content © PaperCept, Inc.

