Paper FrBT5.1
Arefeen, Md Adnan (NEC Labs America), Debnath, Biplob (NEC Labs America), Chakradhar, Srimat (NEC Labs America, Princeton, NJ)
TrafficLens: Multi-Camera Traffic Video Analysis Using LLMs
Scheduled for presentation during the Regular Session "Sensing, Vision, and Perception VI" (FrBT5), Friday, September 27, 2024, 13:30–13:50, Salon 13
2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada
Keywords: Multi-modal ITS, Sensing, Vision, and Perception, Road Traffic Control
Abstract
Traffic cameras are essential in urban areas and play a crucial role in intelligent transportation systems. Multiple cameras at an intersection improve law enforcement, traffic management, and pedestrian safety. However, efficiently managing and analyzing multi-camera feeds is challenging because of the sheer volume of video data, which calls for advanced analytical tools. Large Language Models (LLMs) such as ChatGPT, equipped with retrieval-augmented generation (RAG) systems, excel at text-based tasks, but applying them to traffic video analysis requires first converting the video data into text with a Vision-Language Model (VLM). This conversion is time-consuming and delays the use of traffic videos for generating insights and investigating incidents. To address these challenges, we propose TrafficLens, an algorithm tailored to multi-camera traffic intersections. TrafficLens exploits the overlapping coverage areas of the cameras in a sequential approach: it iteratively applies VLMs with varying token limits, using each camera's output as the prompt for the next, enabling rapid generation of detailed textual descriptions while reducing processing time. Additionally, TrafficLens intelligently bypasses redundant VLM invocations through an object-level similarity detector. Experimental results with real-world datasets demonstrate that TrafficLens reduces processing time by up to 4x while maintaining information accuracy.
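To make the sequential, overlap-aware workflow in the abstract concrete, the Python sketch below illustrates one possible shape of such a loop. It is a minimal illustration under stated assumptions, not the authors' implementation: the function names, the Jaccard set overlap standing in for the object-level similarity detector, the 0.9 skip threshold, and the toy VLM stub are all hypothetical.

```python
# Minimal sketch of a TrafficLens-style loop (illustrative assumptions only).
from typing import Callable, List, Set


def jaccard(a: Set[str], b: Set[str]) -> float:
    """Object-level similarity between two camera views (stand-in detector output)."""
    return len(a & b) / len(a | b) if (a or b) else 1.0


def traffic_lens(
    views: List[Set[str]],                      # per-camera detected-object sets (overlapping coverage)
    token_limits: List[int],                    # per-camera VLM output budgets (varying token limits)
    vlm: Callable[[Set[str], str, int], str],   # stand-in for the VLM call
    sim_threshold: float = 0.9,                 # assumed skip threshold
) -> List[str]:
    descriptions: List[str] = []
    prev_text, prev_objects = "", None

    for objects, max_tokens in zip(views, token_limits):
        # Bypass the VLM when this camera adds little beyond the previous one.
        if prev_objects is not None and jaccard(objects, prev_objects) >= sim_threshold:
            descriptions.append(prev_text)
            continue

        # The previous camera's description seeds the prompt, so later cameras
        # spend their (smaller) token budgets only on what is new in their view.
        text = vlm(objects, prev_text, max_tokens)
        descriptions.append(text)
        prev_text, prev_objects = text, objects

    return descriptions


if __name__ == "__main__":
    # Toy stand-in for a real VLM: lists detected objects within the token budget.
    def fake_vlm(objects: Set[str], context: str, max_tokens: int) -> str:
        return ", ".join(sorted(objects))[:max_tokens]

    cams = [{"car", "bus", "pedestrian"}, {"car", "bus", "pedestrian"}, {"truck", "cyclist"}]
    print(traffic_lens(cams, [200, 100, 100], fake_vlm))
```

In this sketch, the second camera's view largely overlaps the first, so its VLM call is skipped and the earlier description is reused; only genuinely new views trigger further VLM invocations.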