ITSC 2024 Paper Abstract

Paper FrBT6.6

Thilakanayake, Thakshila (Memorial University of Newfoundland), De Silva, Oscar (Memorial University of Newfoundland), Wanasinghe, Thumeera R. (Memorial University of Newfoundland), Mann, George Kingsly (Memorial University of Newfoundland), Jayasiri, Awantha (National Research Council of Canada)

A Generative Adversarial Network-Based Method for LiDAR-Assisted Radar Image Enhancement

Scheduled for presentation during the Regular Session "LiDAR-based perception" (FrBT6), Friday, September 27, 2024, 15:10−15:30, Salon 14

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada

This information is tentative and subject to change. Compiled on December 26, 2024

Keywords: Sensing, Vision, and Perception; Automated Vehicle Operation; Motion Planning; Navigation; Other Theories, Applications, and Technologies

Abstract

This paper presents a generative adversarial network (GAN)-based approach for radar image enhancement. Although radar sensors remain robust under adverse weather conditions, their application in autonomous vehicles (AVs) is commonly limited by the low resolution of the data they produce. The primary goal of this study is to enhance radar images so that they better depict the details and features of the environment, thereby facilitating more accurate object identification in AVs. The proposed method uses high-resolution, two-dimensional (2D) projected light detection and ranging (LiDAR) point clouds as ground-truth images and low-resolution radar images as inputs to train the GAN. The ground-truth images were obtained in two main steps: first, a LiDAR point cloud map was generated by accumulating raw LiDAR scans; then, a customized LiDAR point cloud cropping and projection method was employed to obtain the 2D projected LiDAR point clouds. At inference time, the proposed method relies solely on radar images to generate their enhanced versions. The effectiveness of the proposed method is demonstrated through both qualitative and quantitative results, which show that it can generate enhanced images with clearer object representation than the input radar images, even under adverse weather conditions.
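The ground-truth generation step described above (cropping the accumulated LiDAR point cloud and projecting it onto a 2D image) can be sketched roughly as follows. This is an illustrative bird's-eye-view occupancy projection only, not the paper's exact method; the function name, crop ranges, and grid resolution are assumptions for the sketch:

```python
import numpy as np

def lidar_to_bev_image(points, x_range=(-50.0, 50.0),
                       y_range=(-50.0, 50.0), resolution=0.25):
    """Crop an (N, 3) LiDAR point cloud to a region of interest and
    project it onto a 2D bird's-eye-view occupancy image (uint8).

    Illustrative sketch: ranges and resolution are assumed values,
    not parameters reported in the paper.
    """
    x, y = points[:, 0], points[:, 1]
    # Crop: keep only points inside the rectangular region of interest
    mask = ((x >= x_range[0]) & (x < x_range[1]) &
            (y >= y_range[0]) & (y < y_range[1]))
    x, y = x[mask], y[mask]
    # Grid dimensions implied by the crop extent and cell resolution
    h = int((x_range[1] - x_range[0]) / resolution)
    w = int((y_range[1] - y_range[0]) / resolution)
    # Project: map metric coordinates to integer pixel indices
    rows = ((x - x_range[0]) / resolution).astype(int)
    cols = ((y - y_range[0]) / resolution).astype(int)
    img = np.zeros((h, w), dtype=np.uint8)
    img[rows, cols] = 255  # mark occupied cells
    return img
```

Images produced this way, paired with the corresponding low-resolution radar frames, would form the (input, ground-truth) pairs for GAN training.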


All Content © PaperCept, Inc.

