ITSC 2024 Paper Abstract


Paper WeAT8.2

Sharma, Devansh (Bowling Green State University), Hade, Tihitina (Bowling Green State University), Tian, Qing (University of Alabama at Birmingham & BGSU)

Comparison of Deep Object Detectors on a New Vulnerable Pedestrian Dataset

Scheduled for presentation during the Regular Session "Modeling, Simulation, and Control of Pedestrians and Cyclists I" (WeAT8), Wednesday, September 25, 2024, 10:50–11:10, Salon 16

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24–27, 2024, Edmonton, Canada


Keywords: Modeling, Simulation, and Control of Pedestrians and Cyclists; Sensing, Vision, and Perception

Abstract

Pedestrian safety is a primary concern in autonomous driving. The under-representation of vulnerable groups in today's pedestrian datasets points to an urgent need for a dataset of vulnerable road users. To help train well-rounded self-driving visual detectors and to drive research toward more accurate detection of vulnerable pedestrians, we first introduce a new dataset in this paper: the Bowling Green Vulnerable Pedestrian (BGVP) dataset. The dataset covers four classes, i.e., Children without Disability, Elderly without Disability, With Disability, and Non-Vulnerable, and consists of images collected from the public domain with manually annotated bounding boxes. On the proposed dataset, we have trained and tested five classic or state-of-the-art object detection models, i.e., YOLOv4, YOLOv5, YOLOX, Faster R-CNN, and EfficientDet. Our results indicate that YOLOv4 and YOLOX perform best on our dataset: YOLOv4 scores 0.7999 and YOLOX 0.7779 on the mAP 0.5 metric, while YOLOX outperforms YOLOv4 by 3.8% on the mAP 0.5:0.95 metric. Overall, all five detectors perform well on the With Disability class and poorly on the Elderly without Disability class. On per-class mAP 0.5:0.95, YOLOX consistently outperforms all other detectors, obtaining 0.5644, 0.5242, 0.4781, and 0.6796 for the Children without Disability, Elderly without Disability, Non-Vulnerable, and With Disability categories, respectively. Our dataset and code are available at https://github.com/devvansh1997/BGVP.
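
For context on the metrics reported above, the sketch below shows how mAP 0.5 and mAP 0.5:0.95 are commonly computed for a four-class detector using the torchmetrics library. This is an illustrative sketch only: the abstract does not specify the authors' evaluation tooling, and the class index mapping, boxes, and scores here are invented for demonstration.

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

# Hypothetical class indices for the four BGVP categories (the paper's
# label mapping is not given here; this ordering is assumed).
CLASSES = {
    0: "Children without Disability",
    1: "Elderly without Disability",
    2: "Non-Vulnerable",
    3: "With Disability",
}

# One made-up prediction and ground-truth box for a single image,
# in (x1, y1, x2, y2) pixel coordinates.
preds = [{
    "boxes": torch.tensor([[50.0, 40.0, 180.0, 300.0]]),
    "scores": torch.tensor([0.87]),
    "labels": torch.tensor([3]),  # predicted "With Disability"
}]
targets = [{
    "boxes": torch.tensor([[55.0, 42.0, 175.0, 295.0]]),
    "labels": torch.tensor([3]),
}]

# By default the metric averages over IoU thresholds 0.50:0.05:0.95;
# class_metrics=True also yields a per-class breakdown like the one
# reported in the abstract.
metric = MeanAveragePrecision(class_metrics=True)
metric.update(preds, targets)
result = metric.compute()

print(result["map_50"])         # mAP at IoU threshold 0.5
print(result["map"])            # mAP averaged over IoU 0.5:0.95
print(result["map_per_class"])  # one AP value per class present
```

In practice the two headline numbers differ because mAP 0.5 only requires a loose overlap with the ground truth, while mAP 0.5:0.95 rewards tighter localization, which is one way a detector can lead on one metric and trail on the other, as YOLOv4 and YOLOX do here.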
