ITSC 2024 Paper Abstract


Paper FrAT13.3

Liu, Mingyu (Technical University of Munich), Yurtsever, Ekim (The Ohio State University), Brede, Marc (Technical University of Munich), Meng, Jun (Technical University of Munich), Zimmer, Walter (Technical University of Munich (TUM)), Zhou, Xingcheng (Technical University of Munich), Zagar, Bare Luka (Technical University of Munich (TUM), Chair of Robotics, Artificial Intelligence and Real-Time Systems), Cui, Yuning (Technical University of Munich), Knoll, Alois (Technische Universität München)

GraphRelate3D: Context-Dependent 3D Object Detection with Inter-Object Relationship Graphs

Scheduled for presentation during the Poster Session "3D Object Detection" (FrAT13), Friday, September 27, 2024, 10:30–12:30, Foyer

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada


Keywords: Sensing, Vision, and Perception; Sensing and Intervening, Detectors and Actuators

Abstract

Accurate and effective 3D object detection is critical for ensuring the driving safety of autonomous vehicles. Recently, state-of-the-art two-stage 3D object detectors have exhibited promising performance. However, these methods refine proposals individually, ignoring the rich contextual information contained in the relationships between neighboring proposals. In this study, we introduce an object relation module, consisting of a graph generator and a graph neural network (GNN), that learns spatial relationship patterns among proposals to improve 3D object detection. Specifically, the graph generator builds an inter-object relationship graph over the proposals in a frame, connecting each proposal with its neighboring proposals. The GNN module then extracts edge features from the generated graph and iteratively refines the proposal features with the captured edge features. Finally, the refined features are fed to the detection head to produce the detection results. Our approach improves upon the PV-RCNN baseline on the KITTI validation set for the car class by 0.82%, 0.74%, and 0.58% at the easy, moderate, and hard difficulty levels, respectively. It also outperforms the baseline by more than 1% in BEV AP at the moderate and hard levels on the KITTI test server.
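To make the pipeline concrete, the sketch below shows one plausible form of the object relation module described in the abstract: a k-nearest-neighbor graph generator over proposal box centers, followed by an edge-feature message-passing step that iteratively refines proposal features before the detection head. All names, layer sizes, and design choices here (k-NN graph construction, max-pooling aggregation) are illustrative assumptions, not the authors' implementation.

    # Hypothetical sketch of an object relation module: a k-NN graph generator
    # over proposal centers plus iterative edge-feature message passing.
    # Names and hyperparameters are illustrative, not the authors' code.
    import torch
    import torch.nn as nn

    class ObjectRelationModule(nn.Module):
        def __init__(self, feat_dim: int = 128, k: int = 8, num_iters: int = 2):
            super().__init__()
            self.k = k
            self.num_iters = num_iters
            # Edge MLP: consumes [neighbor feature, relative center offset] per edge.
            self.edge_mlp = nn.Sequential(
                nn.Linear(feat_dim + 3, feat_dim), nn.ReLU(),
                nn.Linear(feat_dim, feat_dim),
            )
            # Fuses aggregated edge features back into each proposal feature.
            self.update_mlp = nn.Linear(2 * feat_dim, feat_dim)

        def build_knn_graph(self, centers: torch.Tensor) -> torch.Tensor:
            # centers: (N, 3) box centers of the proposals in one frame (N > k).
            dists = torch.cdist(centers, centers)             # (N, N) pairwise distances
            dists.fill_diagonal_(float("inf"))                # exclude self-loops
            return dists.topk(self.k, largest=False).indices  # (N, k) neighbor indices

        def forward(self, feats: torch.Tensor, centers: torch.Tensor) -> torch.Tensor:
            # feats: (N, C) proposal features; centers: (N, 3) box centers.
            nbr_idx = self.build_knn_graph(centers)            # (N, k)
            for _ in range(self.num_iters):
                nbr_feats = feats[nbr_idx]                           # (N, k, C)
                offsets = centers[nbr_idx] - centers.unsqueeze(1)    # (N, k, 3)
                edge_feats = self.edge_mlp(
                    torch.cat([nbr_feats, offsets], dim=-1))         # (N, k, C)
                agg = edge_feats.max(dim=1).values                   # (N, C) max-pool
                feats = self.update_mlp(torch.cat([feats, agg], dim=-1))
            return feats  # refined features, fed to the detection head

In this sketch, relative center offsets are concatenated to neighbor features so that edge features encode spatial context, and max-pooling keeps the aggregation permutation-invariant; the paper itself should be consulted for the actual graph construction and GNN design.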

