ITSC 2025 Paper Abstract

Paper FR-LM-T37.5

Li, Dongyue (Nagoya University), Chen, Jialei (Nagoya University), Ito, Seigo (TOYOTA CENTRAL R&D LABS., INC), Murase, Hiroshi (Nagoya University), Deguchi, Daisuke (Nagoya University)

CQVPR: Landmark-Aware Contextual Queries for Visual Place Recognition

Scheduled for presentation during the Regular Session "S37a-Reliable Perception and Robust Sensing for Intelligent Vehicles" (FR-LM-T37), Friday, November 21, 2025, 11:50−12:10, Coolangata 1

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords: Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles

Abstract

Visual place recognition remains challenging due to significant variations in appearance caused by lighting, viewpoint, and structural similarities across environments. To address this, we propose Contextual Query VPR (CQVPR), a novel method that bridges the gap between pixel-level and segment-level representations. Unlike conventional approaches that either rely on low-level appearance cues or high-level semantic partitions, CQVPR integrates fine-grained visual details with global contextual understanding through a set of learnable queries. These contextual queries capture high-level semantic structures within the scene, which are fused with dense pixel-wise features to form robust descriptors for retrieval. To encourage discriminative query learning, we introduce a query matching loss that promotes similarity among queries from the same location while pushing those from different locations apart. Extensive experiments on several datasets demonstrate that the proposed method outperforms other state-of-the-art methods, especially in challenging scenarios.
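The query matching loss described in the abstract (pulling queries from the same location together and pushing those from different locations apart) resembles a standard contrastive objective. The sketch below is an illustrative assumption only; the function name, the margin formulation, and the per-query cosine matching are ours, not details confirmed by the paper:

```python
import numpy as np

def query_matching_loss(queries_a, queries_b, same_place, margin=0.5):
    """Contrastive-style loss on two sets of contextual query descriptors.

    queries_a, queries_b: (num_queries, dim) arrays from two images, with
    queries assumed matched by index. same_place: True if both images show
    the same location. Same-place pairs are pulled toward cosine similarity
    1; different-place pairs are penalized only above `margin`.
    NOTE: hypothetical formulation, not the paper's exact loss.
    """
    a = queries_a / np.linalg.norm(queries_a, axis=1, keepdims=True)
    b = queries_b / np.linalg.norm(queries_b, axis=1, keepdims=True)
    sims = np.sum(a * b, axis=1)  # per-query cosine similarity
    if same_place:
        return float(np.mean(1.0 - sims))                       # pull together
    return float(np.mean(np.maximum(0.0, sims - margin)))       # push apart
```

Under this formulation the loss vanishes for identical same-place queries and for different-place queries whose similarity already falls below the margin.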

All Content © PaperCept, Inc.

