ITSC 2024 Paper Abstract


Paper ThAT5.1

Musiat, Alexander (Mannheim University of Applied Sciences), Reichardt, Laurenz (Mannheim University of Applied Sciences), Schulze, Michael (Mannheim University of Applied Sciences), Wasenmüller, Oliver (Mannheim University of Applied Sciences)

RadarPillars: Efficient Object Detection from 4D Radar Point Clouds

Scheduled for presentation during the Regular Session "Sensing, Vision, and Perception III" (ThAT5), Thursday, September 26, 2024, 10:30−10:50, Salon 13

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada

This information is tentative and subject to change.

Keywords: Sensing, Vision, and Perception

Abstract

Automotive radar systems have evolved to provide not only range, azimuth and Doppler velocity, but also elevation data. This additional dimension allows 4D radar to be represented as a 3D point cloud. As a result, existing deep learning methods for 3D object detection, initially developed for LiDAR data, are often applied to these radar point clouds. However, this neglects the special characteristics of 4D radar data, such as its extreme sparsity and the need to make optimal use of velocity information. To address these gaps in the state of the art, we present RadarPillars, a pillar-based object detection network. By decomposing radial velocity data, introducing PillarAttention for efficient feature extraction, and studying layer scaling to accommodate radar sparsity, RadarPillars significantly outperforms state-of-the-art detection results on the View-of-Delft dataset. Importantly, this comes at a significantly reduced parameter count, surpassing existing methods in terms of efficiency and enabling real-time performance on edge devices.
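
The abstract does not spell out how the radial velocity is decomposed. The snippet below is a minimal, hypothetical sketch of one common way such a decomposition can be done, assuming each radar point carries a compensated radial (Doppler) velocity v_r and that v_r is projected onto the x and y axes via the point's azimuth angle. The point layout [x, y, z, RCS, v_r] and the function name are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def decompose_radial_velocity(points):
        # points: assumed (N, 5) array with columns [x, y, z, rcs, v_r],
        # where v_r is the (ego-motion compensated) radial velocity.
        # Returns the points with two extra feature columns: the radial
        # velocity projected onto the x and y axes of the sensor frame.
        x, y, v_r = points[:, 0], points[:, 1], points[:, 4]
        azimuth = np.arctan2(y, x)          # line-of-sight angle in the x-y plane
        v_x = v_r * np.cos(azimuth)         # component of v_r along x
        v_y = v_r * np.sin(azimuth)         # component of v_r along y
        return np.concatenate([points, v_x[:, None], v_y[:, None]], axis=1)

In such a setup, the augmented per-point features would then be fed to the pillar encoder alongside the original measurements, so the network receives directional velocity cues rather than only the scalar radial value.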

