Paper WeAT13.6
Sural, Shounak (Carnegie Mellon University), Naren, Naren (Carnegie Mellon University), Rajkumar, Ragunathan (Carnegie Mellon University)
ContextVLM: Zero-Shot and Few-Shot Context Understanding for Autonomous Driving Using Vision Language Models
Scheduled for presentation during the Poster Session "Large Language Models" (WeAT13), Wednesday, September 25, 2024,
10:30−12:30, Foyer
2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada
This information is tentative and subject to change. Compiled on October 7, 2024
Keywords: Sensing, Vision, and Perception; Multi-modal ITS; Multi-autonomous Vehicle Studies; Models, Techniques and Simulations
Abstract
In recent years, there has been a notable increase in the development of autonomous vehicle (AV) technologies aimed at improving safety in transportation systems. While AVs have been deployed in the real world to some extent, full-scale deployment requires them to navigate robustly through challenges such as heavy rain, snow, low lighting, construction zones, and GPS signal loss in tunnels. To handle these challenges, an AV must reliably recognize the physical attributes of the environment in which it operates. In this paper, we define context recognition as the task of accurately identifying environmental attributes so that an AV can respond to them appropriately. Specifically, we define 24 environmental contexts capturing a variety of weather, lighting, traffic, and road conditions that an AV must be aware of. Motivated by the need to recognize environmental contexts, we create a context recognition dataset called DrivingContexts with more than 1.6 million context-query pairs relevant to an AV. Since traditional supervised computer vision approaches do not scale well to a wide variety of contexts, we propose a framework called ContextVLM that uses vision-language models to detect contexts via zero- and few-shot approaches. ContextVLM reliably detects relevant driving contexts with an accuracy of more than 95% on our dataset while running in real time on an Nvidia GeForce GTX 1050 Ti GPU on an AV, with a latency of 10.5 ms per query.
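The abstract describes posing context recognition to a vision-language model as per-context queries (the "context-query pairs" in DrivingContexts). A minimal sketch of how such binary queries could be constructed and their answers parsed is shown below; the context names, category grouping, prompt wording, and helper functions are illustrative assumptions, not the authors' exact formulation or context set.

```python
# Hypothetical sketch: turning a set of driving contexts into yes/no
# VLM queries, and mapping free-form VLM replies back to boolean flags.
# The actual 24 contexts and prompt templates in the paper may differ.

# Example context attributes grouped by category (assumed, not the paper's list).
CONTEXTS = {
    "weather": ["heavy rain", "snow", "fog"],
    "lighting": ["low light", "glare"],
    "road": ["a construction zone", "a tunnel"],
}

def build_queries(contexts=CONTEXTS):
    """Return one natural-language yes/no question per context attribute."""
    queries = []
    for category, attributes in contexts.items():
        for attr in attributes:
            queries.append(
                f"Is the vehicle currently driving in {attr}? Answer yes or no."
            )
    return queries

def parse_answer(reply):
    """Map a free-form VLM reply to a boolean context flag."""
    return reply.strip().lower().startswith("yes")
```

With one query per attribute, a frame is checked against every context independently, so new contexts can be added by extending the dictionary rather than retraining a supervised classifier, which matches the scalability motivation stated in the abstract.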