ITSC 2025 Paper Abstract

Paper FR-EA-T31.6

Liu, Tong (Tsinghua University), Wang, Yinuo (Tsinghua University), Song, Xujie (Tsinghua University), Zou, Wenjun (Tsinghua University), Chen, LiangFa (University of Science and Technology Beijing), Wang, Likun (Tsinghua University), Shuai, Bin (Tsinghua University), Duan, Jingliang (University of Science and Technology Beijing), Li, Shengbo Eben (Tsinghua University)

Distributional Soft Actor-Critic with Diffusion Policy

Scheduled for presentation during the Regular Session "S31b-AI-Driven Motion Prediction and Safe Control for Autonomous Systems" (FR-EA-T31), Friday, November 21, 2025, 14:50–15:30, Southport 1

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change.

Keywords Real-time Motion Planning and Control for Autonomous Vehicles in ITS Networks, Energy-efficient Motion Control for Autonomous Vehicles, Smart Logistics with Real-time Traffic Data for Freight Routing and Optimization

Abstract

Reinforcement learning has proven highly effective for complex control tasks. Traditional methods typically model value distributions with unimodal distributions, such as Gaussians, which often biases value function estimation and degrades algorithm performance. This paper proposes a distributional reinforcement learning algorithm, DSAC-D (Distributional Soft Actor-Critic with Diffusion Policy), to address the challenges of value function estimation bias and multimodal policy representation. A multimodal distributional policy iteration framework that converges to the optimal policy is established by introducing policy entropy and a value distribution function. A diffusion value network that accurately characterizes multimodal distributions is constructed by generating reward samples through reverse sampling with a diffusion model. On this basis, a distributional reinforcement learning algorithm with dual diffusion of the value network and the policy network is derived. Experiments on MuJoCo benchmark tasks demonstrate that the proposed algorithm not only learns multimodal policies but also achieves state-of-the-art (SOTA) performance on all nine control tasks, significantly suppressing estimation bias and improving total average return by over 10% compared with existing mainstream algorithms. Real-vehicle tests show that DSAC-D accurately characterizes the multimodal distribution of different driving styles, and that the diffusion policy network can represent multimodal trajectories.
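
The abstract describes generating return samples by reverse sampling with a diffusion model. As a minimal illustrative sketch only — not the paper's implementation — the following PyTorch code shows how a diffusion-based value network could draw scalar return samples via standard DDPM ancestral sampling, conditioned on state-action features. All names (ValueDenoiser, sample_returns), the linear noise schedule, and the step count are assumptions made for illustration.

# Minimal sketch (not the authors' code): DDPM-style reverse sampling of
# scalar return values from a hypothetical diffusion value network.
import torch

class ValueDenoiser(torch.nn.Module):
    """Predicts the noise added to a scalar return sample, conditioned on
    state-action features and the diffusion timestep (illustrative)."""
    def __init__(self, feat_dim: int, hidden: int = 256):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(feat_dim + 2, hidden),  # +1 noisy return, +1 timestep
            torch.nn.SiLU(),
            torch.nn.Linear(hidden, 1),
        )

    def forward(self, feats, z, t):
        # feats: (B, feat_dim); z: (B, 1) noisy return; t: (B, 1) timestep in [0, 1]
        return self.net(torch.cat([feats, z, t], dim=-1))

@torch.no_grad()
def sample_returns(denoiser, feats, n_steps: int = 50):
    """Reverse-sample returns z_0 from pure noise z_T (DDPM ancestral sampling)."""
    B = feats.shape[0]
    betas = torch.linspace(1e-4, 0.02, n_steps)  # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(B, 1)  # z_T ~ N(0, I)
    for i in reversed(range(n_steps)):
        t = torch.full((B, 1), i / n_steps)
        eps = denoiser(feats, z, t)                      # predicted noise
        coef = betas[i] / torch.sqrt(1.0 - alpha_bars[i])
        mean = (z - coef * eps) / torch.sqrt(alphas[i])  # posterior mean
        noise = torch.randn_like(z) if i > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[i]) * noise
    return z  # samples approximating a learned multimodal return distribution

Training the denoiser (e.g., with the usual noise-prediction loss against target returns) and coupling it with a diffusion policy network, as in the paper's dual-diffusion scheme, are beyond this sketch.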
