ITSC 2024 Paper Abstract

Paper ThBT17.2

Xu, Fengyu (Zhejiang Lab), Xiao, Yongxiong (Zhejiang Lab), Mei, Jilin (Institute of Computing Technology, Chinese Academy of Sciences), Hu, Yu (Institute of Computing Technology, Chinese Academy of Sciences), Fu, Qiang (Zhejiang Lab)

Domain-Adaptive Point Cloud Semantic Segmentation from Urban to Off-Road Scenes Based on Knowledge-Augmented Deep Learning

Scheduled for presentation during the Poster Session "Perception - Semantic segmentation" (ThBT17), Thursday, September 26, 2024, 14:30–16:30, Foyer

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada

Keywords: Sensing, Vision, and Perception; Automated Vehicle Operation, Motion Planning, Navigation; Aerial, Marine and Surface Intelligent Vehicles

Abstract

Domain-adaptive point cloud semantic segmentation (PCSS) is crucial for high-level autonomous driving. However, supervised deep learning methods are often constrained by their training data and generalize poorly to unknown environments. To address these challenges, we propose a domain-adaptive PCSS approach leveraging knowledge-augmented deep learning (KADL). Specifically, we introduce three strategies: (1) point cloud data augmentation based on the compact bird's-eye view (CBEV) map, a novel point cloud organization method; (2) implicit knowledge augmentation based on knowledge distillation; and (3) explicit knowledge augmentation based on attribution analysis and network modulation. For experimental validation, we use two distinct datasets: the urban dataset SemanticKITTI for training and the off-road dataset RELLIS-3D for testing. Additionally, we add road labels to RELLIS-3D, which originally lacked a road category. To our knowledge, this work is the first to investigate domain-adaptive PCSS from urban to off-road scenes. The experimental results demonstrate that our method is effective and achieves promising performance. The code and data are available at https://github.com/xfy0032/kadlpcss.
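The abstract names its three strategies only at a high level. For orientation, two minimal, generic sketches follow; they are illustrative assumptions, not the paper's implementation. In particular, the CBEV construction is the paper's own contribution and is not described in the abstract, so the first sketch shows only a conventional BEV rasterization for contrast; all function names, grid ranges, and parameter values in both sketches are invented for illustration.

A standard bird's-eye-view discretization of a LiDAR point cloud (NumPy), of the kind a compact BEV map would presumably refine:

```python
import numpy as np

def points_to_bev_grid(points, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0), cell=0.5):
    """Rasterize an (N, 3) point cloud into a BEV height map.

    Generic sketch only; the paper's CBEV map is a compact variant whose
    details are not given in the abstract.
    """
    xs, ys, zs = points[:, 0], points[:, 1], points[:, 2]
    keep = (xs >= x_range[0]) & (xs < x_range[1]) & (ys >= y_range[0]) & (ys < y_range[1])
    xs, ys, zs = xs[keep], ys[keep], zs[keep]
    w = int((x_range[1] - x_range[0]) / cell)
    h = int((y_range[1] - y_range[0]) / cell)
    ix = ((xs - x_range[0]) / cell).astype(np.int64)
    iy = ((ys - y_range[0]) / cell).astype(np.int64)
    height = np.full((h, w), -np.inf, dtype=np.float32)
    np.maximum.at(height, (iy, ix), zs)   # keep the highest point per cell
    height[np.isneginf(height)] = 0.0     # mark empty cells as ground level
    return height
```

And a textbook soft-target distillation loss (PyTorch), one common way to realize "implicit knowledge augmentation based on knowledge distillation"; the paper's actual objective may differ:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Hinton-style soft-target loss over per-point class logits, shape (N, C)."""
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```

In a typical setup this term is added to the usual cross-entropy segmentation loss with the teacher frozen; for the method actually used, see the repository linked above.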
