ITSC 2025 Paper Abstract


Paper WE-EA-T9.4

Liu, Haichao (The Hong Kong University of Science and Technology (Guangzhou)), Liu, Hongji (The Hong Kong University of Science and Technology), Ma, Jun (The Hong Kong University of Science and Technology (Guangzhou))

An Autonomous Mobility on Demand Platform Supporting Cooperative Driving with Visual Context Generation for LMMs

Scheduled for presentation during the Regular Session "S09b-Optimization for Multimodal and On-Demand Urban Mobility Systems" (WE-EA-T9), Wednesday, November 19, 2025, 14:30−14:50, Coolangatta 3

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 19, 2025

Keywords: Multimodal Transportation Networks for Efficient Urban Mobility, Multi-vehicle Coordination for Autonomous Fleets in Urban Environments, Demand-Responsive Transit Systems for Smart Cities

Abstract

As Connected Autonomous Vehicles (CAVs) emerge as a cornerstone of future transportation systems, Autonomous Mobility on Demand (AMoD) platforms stand to revolutionize urban mobility by mitigating traffic congestion and enhancing passenger comfort and convenience. However, existing research primarily addresses either vehicle dispatch, using real-world datasets to estimate the time cost for each CAV to reach its destination, or cooperative motion planning of CAVs for obstacle avoidance; it often overlooks the integration of these two aspects, namely CAV destination assignment and cooperative motion planning, within AMoD systems. To bridge this gap, we propose CoDriveVis, a platform that integrates microscopic motion planning and control with macroscopic vehicle scheduling and dispatching. CoDriveVis leverages CARLA's perception capabilities to produce multimodal outputs, such as LiDAR point clouds, images, and videos from diverse perspectives, which can inform traffic system operations, especially when coupled with large multimodal models (LMMs). It also features Bird's Eye View (BEV) image generation to visualize vehicle dispatching and the distribution of requests across the map. Scheduling integration is streamlined through organized dictionaries for CAVs and passenger requests, while the platform's control modes support both position transitions and throttle/steering inputs. CoDriveVis also facilitates destination changes to update the coordination of CAVs. Experimental deployment of representative methods within CoDriveVis demonstrates its efficacy and practicality as a robust simulation platform for advancing AMoD systems. The code is available at https://github.com/henryhcliu/CoDriveVis.git.
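To make the dictionary-based scheduling interface mentioned in the abstract more concrete, the sketch below shows one plausible way the CAV and passenger-request dictionaries could be organized and coupled by a simple greedy dispatcher. All field, function, and identifier names here are illustrative assumptions and do not reflect the actual CoDriveVis API; a real dispatcher would also estimate travel time on the road network rather than use straight-line distance.

```python
# Hypothetical sketch of a dictionary-based scheduling interface for an
# AMoD platform. All names (fields, functions, identifiers) are
# illustrative assumptions, not the actual CoDriveVis API.

# One entry per CAV: current position, occupancy, and assigned destination.
cavs = {
    "cav_01": {"position": (12.0, 48.5), "occupied": False, "destination": None},
    "cav_02": {"position": (30.2, 15.1), "occupied": True,  "destination": (75.0, 22.4)},
}

# One entry per passenger request: pickup, drop-off, and assignment status.
requests = {
    "req_17": {"pickup": (10.5, 50.0), "dropoff": (60.0, 80.0), "assigned_to": None},
}


def euclidean(a, b):
    """Straight-line distance as a crude proxy for travel cost."""
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5


def dispatch_nearest(cavs, requests):
    """Greedy baseline: assign each unassigned request to the closest idle CAV.

    The point of the sketch is only to show how the two dictionaries are
    coupled: the dispatcher writes a new destination into the CAV entry,
    which the motion-planning layer can then pick up (also supporting
    destination changes on the fly).
    """
    for req_id, req in requests.items():
        if req["assigned_to"] is not None:
            continue
        idle = {cid: c for cid, c in cavs.items() if not c["occupied"]}
        if not idle:
            break
        best = min(idle, key=lambda cid: euclidean(idle[cid]["position"], req["pickup"]))
        cavs[best]["destination"] = req["pickup"]
        cavs[best]["occupied"] = True
        req["assigned_to"] = best


dispatch_nearest(cavs, requests)
print(cavs["cav_01"])      # now routed to req_17's pickup location
print(requests["req_17"])  # now assigned to cav_01
```

In this hypothetical layout, the downstream control layer would consume the per-CAV `destination` field either as a target position (position-transition mode) or as a reference for a low-level throttle/steering controller, matching the two control modes described in the abstract.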
