ITSC 2025 Paper Abstract


Paper WE-EA-T7.6

Schwarzer, Leon (TU Dortmund University), Zeller, Matthias (CARIAD SE), Casado Herraez, Daniel (CARIAD SE & University of Bonn), Dierl, Simon (TU Dortmund University), Heidingsfeld, Michael (CARIAD SE), Stachniss, Cyrill (University of Bonn)

Self-Supervised Moving Object Segmentation of Sparse and Noisy Radar Point Clouds

Scheduled for presentation during the Regular Session "S07b-Smart Infrastructure and Data-Driven Sensing for Intelligent Mobility" (WE-EA-T7), Wednesday, November 19, 2025, 14:50–15:30, Coolangatta 1

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change.

Keywords: Deep Learning for Scene Understanding and Semantic Segmentation in Autonomous Vehicles

Abstract

Moving object segmentation is a crucial task for safe and reliable autonomous mobile systems such as self-driving cars, improving the reliability and robustness of subsequent tasks like SLAM or path planning. While the segmentation of camera or LiDAR data is widely researched and achieves strong results, it often introduces additional latency because temporal context must be accumulated over sequences of scans. Radar sensors overcome this problem with their ability to directly measure a point's Doppler velocity, which can be exploited for single-scan moving object segmentation. However, radar point clouds are often sparse and noisy, making data annotation for supervised learning tedious, time-consuming, and costly. To overcome this problem, we address the task of self-supervised moving object segmentation of sparse and noisy radar point clouds. We follow a two-step approach: contrastive self-supervised representation learning followed by supervised fine-tuning on limited amounts of annotated data. We propose a novel clustering-based contrastive loss function with cluster refinement based on dynamic point removal to pretrain the network to produce motion-aware representations of the radar data. Our method improves label efficiency after fine-tuning, effectively boosting state-of-the-art performance through self-supervised pretraining.
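To make the clustering-based contrastive pretraining idea concrete, the sketch below shows one minimal way such a loss could look: points are grouped by thresholding their Doppler velocity and clustering spatially, and each point's embedding is then pulled toward its own cluster's centroid and pushed away from other clusters' centroids (InfoNCE over centroids). All function names, thresholds, the DBSCAN step, and the centroid formulation are illustrative assumptions, not the authors' actual loss or refinement procedure from the paper.

```python
# Minimal illustrative sketch of a clustering-based contrastive loss for
# radar point clouds. This is NOT the paper's implementation; the Doppler
# threshold is a crude stand-in for the paper's cluster refinement via
# dynamic point removal, and all names/values are hypothetical.
import torch
import torch.nn.functional as F
from sklearn.cluster import DBSCAN


def cluster_moving_points(xyz, doppler, doppler_thresh=0.5, eps=1.0):
    """Cluster points whose (assumed ego-motion-compensated) Doppler velocity
    exceeds a threshold; all remaining points get label -1 (static/noise)."""
    labels = torch.full((xyz.shape[0],), -1, dtype=torch.long)
    moving = doppler.abs() > doppler_thresh
    if moving.sum() >= 2:
        db = DBSCAN(eps=eps, min_samples=2).fit(xyz[moving].numpy())
        labels[moving] = torch.from_numpy(db.labels_).long()  # -1 = DBSCAN noise
    return labels


def cluster_contrastive_loss(embeddings, labels, temperature=0.1):
    """InfoNCE-style loss: each point is attracted to its own cluster's mean
    embedding and repelled from the mean embeddings of all other clusters."""
    valid = labels >= 0
    emb = F.normalize(embeddings[valid], dim=1)            # (M, D)
    lbl = labels[valid]
    ids = lbl.unique()                                     # sorted cluster IDs
    if ids.numel() < 2:                                    # need >= 2 clusters
        return embeddings.new_zeros(())
    centroids = torch.stack([F.normalize(emb[lbl == c].mean(0), dim=0)
                             for c in ids])                # (K, D)
    logits = emb @ centroids.T / temperature               # (M, K) similarities
    targets = torch.bucketize(lbl, ids)                    # index of own cluster
    return F.cross_entropy(logits, targets)


# Example usage with random stand-in data (a real pipeline would use a
# point-cloud backbone to produce the per-point embeddings):
xyz = torch.randn(256, 3) * 10
doppler = torch.randn(256)
feats = torch.randn(256, 32, requires_grad=True)
loss = cluster_contrastive_loss(feats, cluster_moving_points(xyz, doppler))
```

Because the loss only needs cluster assignments derived from the radar measurements themselves, no manual annotation is required during pretraining; labeled data enters only in the subsequent fine-tuning step described in the abstract.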
