ITSC 2025 Paper Abstract

Paper TH-EA-T20.5

Roy, Parthib (University of California, Merced), Perisetla, Srinivasa (University of California, Merced), Shriram, Shashank (University of California, Merced), Krishnaswamy, Harsha (University of California, Merced), Keskar, Aryan (University of California, Merced), Greer, Ross (University of California, San Diego)

DoScenes: An Autonomous Driving Dataset with Natural Language Instruction for Human Interaction and Vision-Language Navigation

Scheduled for presentation during the Invited Session "S20b-Foundation Model-Enabled Scene Understanding, Reasoning, and Decision-Making for Autonomous Driving and ITS" (TH-EA-T20), Thursday, November 20, 2025, 14:50−14:50, Surfers Paradise 2

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords Human-Machine Interaction Systems for Enhanced Driver Assistance and Safety, Trust, Acceptance, and Public Perception of Autonomous Transportation Technologies, Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles

Abstract

Human-interactive robotic systems, particularly autonomous vehicles (AVs), must effectively integrate human instructions into their motion planning. This paper introduces doScenes, a novel dataset designed to facilitate research on human-vehicle instruction interactions, focusing on short-term directives that directly influence vehicle motion. By annotating multimodal sensor data with natural language instructions and referentiality tags, doScenes bridges the gap between instruction and driving response, enabling context-aware and adaptive planning. Unlike existing datasets that focus on ranking or scene-level reasoning, doScenes emphasizes actionable directives tied to static and dynamic scene objects. This framework addresses limitations in prior research, such as reliance on simulated data or predefined action sets, by supporting nuanced and flexible responses in real-world scenarios. This work lays the foundation for developing learning strategies that seamlessly integrate human instructions into autonomous systems, advancing safe and effective human-vehicle collaboration. We make our data publicly available at https://www.github.com/rossgreer/doScenes.
