ITSC 2025 Paper Abstract

Paper TH-EA-T28.1

Fritz, Daniel (University of Applied Sciences, Esslingen), Lagamtzis, Dimitrios (Mercedes-Benz AG), Mink, Michael (Mercedes-Benz AG), Schober, Steffen (University of Applied Sciences, Esslingen)

ADVNTG: Autonomous Driving Vehicle and Neural Transformer-Based HD Map Generation Using Crowd-Sourced Fleet Data

Scheduled for presentation during the Regular Session "S28b-Multi-Sensor Fusion and Perception for Robust Autonomous Driving" (TH-EA-T28), Thursday, November 20, 2025, 13:30−13:50, Stradbroke

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords: Advanced Sensor Fusion for Robust Autonomous Vehicle Perception, Multi-vehicle Coordination for Autonomous Fleets in Urban Environments, Autonomous Vehicle Safety and Performance Testing

Abstract

The field of autonomous driving (AD) has undergone continuous advancement, driven by both academic research and technological implementation. Onboard sensors are used to generate a representation of the vehicle's immediate surroundings, enhancing situational understanding and awareness in the current driving scenario. Furthermore, high-definition (HD) maps facilitate this task by providing a strong prior, since they are not limited by sensor range or adverse weather conditions. However, the continuous maintenance of such maps poses challenges, often due to frequent road construction, making the manual generation of HD maps a laborious task. In this work, we aim to generate an offline HD map from crowd-sourced vehicle fleet data. First, we reduce the dataset by compressing lane-level features in order to eliminate redundant points. Subsequently, we perform an aggregation step by clustering similar lane-level features. The reduced dataset is then used to train a neural network. We investigate and compare the performance of two distinct model architectures. For the Transformer model, we investigate the impact of different positional encoding strategies on evaluation metrics. For the graph neural network (GNN) model, we compare the performance of various GNN layers. Additionally, for both model types, we explore the effects of different loss functions and input feature vectors.
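The abstract does not specify which compression algorithm is used for the lane-level point reduction; as an illustrative sketch only, a standard approach for eliminating redundant points from a lane polyline is Ramer-Douglas-Peucker simplification, which keeps a point only if it deviates from the connecting chord by more than a tolerance `eps` (both the function names and the tolerance value here are assumptions, not taken from the paper):

```python
import math

def _point_line_dist(p, a, b):
    # Perpendicular distance from point p to the chord through a and b.
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def simplify(polyline, eps):
    """Ramer-Douglas-Peucker: drop points that lie within eps of the chord,
    recursing on the farthest point when it exceeds the tolerance."""
    if len(polyline) < 3:
        return list(polyline)
    a, b = polyline[0], polyline[-1]
    dists = [_point_line_dist(p, a, b) for p in polyline[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__)
    if dists[i] < eps:
        return [a, b]          # all interior points are redundant
    # Keep the farthest point and simplify both halves around it.
    left = simplify(polyline[:i + 2], eps)
    right = simplify(polyline[i + 1:], eps)
    return left[:-1] + right   # avoid duplicating the split point

# A near-collinear point is dropped; the sharp bend at (3, 5) is kept.
lane = [(0, 0), (1, 0.01), (2, 0), (3, 5), (4, 0)]
print(simplify(lane, eps=0.1))  # → [(0, 0), (2, 0), (3, 5), (4, 0)]
```

The tolerance trades map size against geometric fidelity: a larger `eps` removes more points but flattens curvature, which matters for lane geometry.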
