ITSC 2025 Paper Abstract

Paper WE-EA-T7.5

Chen, Jingda (Shanghai Jiao Tong University), Zhuang, Hanyang (Shanghai Jiao Tong University), Wang, Chunxiang (Shanghai Jiao Tong University), Yang, Ming (Shanghai Jiao Tong University)

View-Aware High-Precision Vehicle Tracking Using Roadside RGB-D Camera Network

Scheduled for presentation during the Regular Session "S07b-Smart Infrastructure and Data-Driven Sensing for Intelligent Mobility" (WE-EA-T7), Wednesday, November 19, 2025, 14:50−14:50, Coolangata 1

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

Keywords: IoT-based Traffic Sensors and Real-time Data Processing Systems; IoT for ITS Infrastructure: Smart Traffic Lights, Sensors, and Actuators; Cloud and Edge Computing Integration in ITS for Real-time Traffic Data Processing

Abstract

Infrastructure-based perception systems built on distributed RGB-D camera networks effectively address occlusion and limited fields of view, and have been validated to deliver high-precision vehicle tracking through model-registration-based methods. However, as a vehicle moves across different sensor units, the captured point clouds become fragmented and incomplete, degrading tracking accuracy. To overcome this issue, this paper introduces a view-aware, high-precision vehicle tracking framework for infrastructure-based RGB-D camera networks. The primary contribution is an adaptive model-cropping algorithm that accounts for the perspectives of the roadside RGB-D cameras to eliminate misalignment between the scan and the model. The cropped model and the scan then undergo point-to-model registration, followed by an Extended Kalman Filter with a constant turn rate and velocity (CTRV) motion model. Our method significantly improves model-observation overlap, reduces registration failures on highly fragmented point clouds, and yields more robust pose estimates. Experiments in practical scenarios verify the efficacy of the approach.
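As a concrete illustration of the cropping and registration steps described above, the sketch below crops the vehicle model to the points visible from a given roadside camera position and then aligns the scan against the cropped model. It assumes Open3D; hidden-point removal and point-to-plane ICP are stand-ins for the paper's adaptive crop algorithm and point-to-model registration, and the function names and correspondence threshold are illustrative assumptions.

    # Sketch of view-aware model cropping + registration (assumed stand-ins,
    # not the paper's exact algorithm), using Open3D.
    import numpy as np
    import open3d as o3d

    def crop_model_to_view(model: o3d.geometry.PointCloud,
                           cam_pos: np.ndarray) -> o3d.geometry.PointCloud:
        """Keep only model points visible from the roadside camera position."""
        diameter = np.linalg.norm(model.get_max_bound() - model.get_min_bound())
        # Spherical-projection hidden point removal from the camera viewpoint.
        _, visible_idx = model.hidden_point_removal(cam_pos.tolist(),
                                                    radius=diameter * 100.0)
        return model.select_by_index(visible_idx)

    def register_scan(scan, cropped_model, init_pose=np.eye(4)):
        """Point-to-plane ICP as a stand-in for point-to-model registration."""
        cropped_model.estimate_normals()
        result = o3d.pipelines.registration.registration_icp(
            scan, cropped_model,
            max_correspondence_distance=0.3,  # meters; assumed threshold
            init=init_pose,
            estimation_method=o3d.pipelines.registration
                .TransformationEstimationPointToPlane())
        return result.transformation

Cropping the model to the camera's view before registration means the scan and the model cover comparable surfaces, which is what drives the improved model-observation overlap claimed in the abstract.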
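The registered poses are then filtered with an Extended Kalman Filter under a CTRV motion model. The following minimal sketch shows the prediction step, assuming a state [x, y, yaw, v, yaw_rate]; the near-zero yaw-rate fallback, the finite-difference Jacobian, and the noise matrix Q are illustrative choices rather than the paper's implementation.

    # Minimal EKF time update under a CTRV motion model (illustrative sketch).
    import numpy as np

    def ctrv_predict(state: np.ndarray, dt: float) -> np.ndarray:
        """Propagate state [x, y, yaw, v, yaw_rate] forward by dt seconds."""
        x, y, psi, v, w = state
        if abs(w) > 1e-6:
            x_n = x + v / w * (np.sin(psi + w * dt) - np.sin(psi))
            y_n = y + v / w * (np.cos(psi) - np.cos(psi + w * dt))
        else:  # near-zero yaw rate: straight-line motion
            x_n = x + v * np.cos(psi) * dt
            y_n = y + v * np.sin(psi) * dt
        return np.array([x_n, y_n, psi + w * dt, v, w])

    def ekf_predict(state, P, dt, Q):
        """EKF prediction with a finite-difference Jacobian of the CTRV model."""
        n = state.size
        F = np.zeros((n, n))
        eps = 1e-5
        for i in range(n):
            d = np.zeros(n)
            d[i] = eps
            F[:, i] = (ctrv_predict(state + d, dt)
                       - ctrv_predict(state - d, dt)) / (2 * eps)
        return ctrv_predict(state, dt), F @ P @ F.T + Q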
