ITSC 2024 Paper Abstract


Paper ThAT17.6

Zheng, Linwei (The Hong Kong University of Science and Technology, HKUST), Hu, Xiangcheng (HKUST), Ma, Fulong (Hong Kong University of Science and Technology), Zhao, Guoyang (HKUST(GZ)), Qi, Weiqing (The Hong Kong University of Science and Technology (Guangzhou)), Ma, Jun (The Hong Kong University of Science and Technology (Guangzhou)), Liu, Ming (HKUST)

A Translation-Tolerant Place Recognition Method by Viewpoint Unification

Scheduled for presentation during the Poster Session "Accurate Positioning and Localization" (ThAT17), Thursday, September 26, 2024, 10:30−12:30, Foyer

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada

This information is tentative and subject to change. Compiled on October 3, 2024

Keywords: Sensing, Vision, and Perception; Accurate Global Positioning; Driver Assistance Systems

Abstract

Place recognition serves as a fundamental component of tasks such as loop closure detection and relocalization for mobile robots. Polar coordinate representations such as Scan Context, which align with the data structure of range sensors, have become the most common form of point cloud descriptor for place recognition. While polar representations are rotation invariant, they remain susceptible to translation variations. In this study, we introduce a novel approach: shifting the viewpoint of the original point cloud to construct a unified Scan Context, thereby mitigating translation variance. Our key idea is to identify a stable, unified viewpoint for a given place and then pre-translate the point cloud accordingly, which naturally yields a descriptor free of translation variance. Importantly, within a given place, the viewpoint unification process tends to relocate the viewpoint to a similar position irrespective of the original sensor perspective. In other words, the unified Scan Context is associated more closely with the place's structural characteristics than with the physical location of the sensor. We validate our method through a comprehensive series of experiments on synthetic scenarios and real-world datasets, demonstrating its robustness to translation variations.
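The abstract does not detail how the unified viewpoint is computed, but the overall pipeline it describes — build a polar-grid Scan Context descriptor, and optionally pre-translate the cloud to a chosen viewpoint first — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the bin resolution, the max-height bin encoding, and the `unified_viewpoint` argument (which stands in for the paper's viewpoint-unification step) are all assumptions.

```python
import numpy as np

def scan_context(points, num_rings=20, num_sectors=60, max_range=80.0):
    """Polar-grid descriptor in the style of Scan Context (illustrative sketch).

    points: (N, 3) array of x, y, z coordinates relative to the viewpoint.
    Each (ring, sector) bin stores the maximum z of the points falling in it;
    empty bins stay 0 (assumes z >= 0, i.e. heights above ground).
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)                        # radial distance from viewpoint
    theta = np.mod(np.arctan2(y, x), 2 * np.pi)  # azimuth in [0, 2*pi)

    keep = r < max_range
    ring = np.minimum((r[keep] / max_range * num_rings).astype(int),
                      num_rings - 1)
    sector = np.minimum((theta[keep] / (2 * np.pi) * num_sectors).astype(int),
                        num_sectors - 1)

    desc = np.zeros((num_rings, num_sectors))
    np.maximum.at(desc, (ring, sector), z[keep])  # per-bin max height
    return desc

def unified_scan_context(points, unified_viewpoint):
    """Pre-translate the cloud to a unified viewpoint (hypothetical interface),
    so the descriptor reflects the place's structure rather than the sensor pose.
    How `unified_viewpoint` is chosen is the paper's contribution and is not
    reproduced here."""
    return scan_context(points - np.asarray(unified_viewpoint))
```

Because both functions operate on viewpoint-relative coordinates, two scans of the same place taken from different sensor positions would map to the same descriptor whenever viewpoint unification sends them to the same unified viewpoint.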


All Content © PaperCept, Inc.
