ITSC 2025 Paper Abstract

Paper WE-EA-T13.5

Elgendy, Ahmed (Queen's University), Mounier, Eslam (Queen's University), Elghamrawy, Haidy (Royal Military College of Canada), Noureldin, Aboelmagd (Royal Military College of Canada)

Vision-Aided Semantic Filtering for Enhancing LiDAR Odometry in Dynamic Urban Environments

Scheduled for presentation during the Regular Session "S13b-Localization, Mapping, and Sensing for Robust Navigation in ITS" (WE-EA-T13), Wednesday, November 19, 2025, 14:50−14:50, Stradbroke

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 19, 2025

Keywords: Sensor Integration and Calibration for Accurate Localization in Dynamic Road Conditions; Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles; Verification of Autonomous Vehicle Sensor Systems in Real-world Scenarios

Abstract

Light detection and ranging (LiDAR) sensors provide highly detailed geometric measurements, enabling accurate motion estimation through LiDAR odometry (LO). However, in dense urban environments, transient objects such as vehicles and pedestrians can introduce inconsistencies between consecutive scans, leading to pose estimation errors. This paper presents a vision-aided method that improves LO by identifying and removing transient points from LiDAR scans using semantic scene understanding. The proposed method employs an early fusion approach, leveraging monocular camera semantic segmentation masks, preprocessed and clustered solid-state LiDAR point clouds, and an effective data association strategy. Designed as a modular component, the proposed method is intended for seamless integration into LO, simultaneous localization and mapping (SLAM), or other pose estimation pipelines to enhance robustness in dynamic urban environments. Evaluations on real-world urban scenarios demonstrate consistent improvements in pose estimation accuracy across a range of dynamic conditions. On average, the method reduces horizontal position error by 28%, decreases heading error by 34%, and increases sub-meter and lane-level accuracies by 13.3% and 11.3%, respectively.
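The core idea described in the abstract, projecting LiDAR points into a camera semantic segmentation mask and discarding points that land on transient classes, can be illustrated with a minimal sketch. All names, matrices, and class IDs below are illustrative assumptions, not the authors' actual pipeline (which additionally involves point-cloud clustering and a data association strategy not shown here).

```python
import numpy as np

# Assumed class IDs for transient objects (e.g. car, person); purely illustrative.
TRANSIENT_CLASSES = {13, 24}

def filter_transient_points(points_lidar, T_cam_lidar, K, seg_mask):
    """Keep only LiDAR points that do NOT project onto a transient-class pixel.

    points_lidar : (N, 3) XYZ points in the LiDAR frame
    T_cam_lidar  : (4, 4) extrinsic transform, LiDAR frame -> camera frame
    K            : (3, 3) camera intrinsic matrix
    seg_mask     : (H, W) per-pixel semantic class IDs from the camera image
    """
    h, w = seg_mask.shape
    # Transform points to the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    keep = np.ones(len(points_lidar), dtype=bool)
    in_front = pts_cam[:, 2] > 0          # only points in front of the camera
    uvw = (K @ pts_cam[in_front].T).T     # pinhole projection
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)  # integer pixel coordinates

    # Points outside the image cannot be checked and are kept by default.
    in_img = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    idx = np.flatnonzero(in_front)[in_img]
    classes = seg_mask[uv[in_img, 1], uv[in_img, 0]]
    keep[idx[np.isin(classes, list(TRANSIENT_CLASSES))]] = False
    return points_lidar[keep]
```

The filtered cloud can then be passed unchanged to any downstream LO or SLAM front end, which is what makes this kind of semantic filtering a modular, pipeline-agnostic preprocessing step.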


All Content © PaperCept, Inc.
