ITSC 2024 Paper Abstract

Paper WeBT2.2

Lerch, David (Fraunhofer IOSB), El Bachiri, Yasser (Deggendorf Institute of Technology), Martin, Manuel (Fraunhofer IOSB), Diederichs, Frederik (Fraunhofer IOSB), Stiefelhagen, Rainer (Karlsruhe Institute of Technology)

3D Skeleton-Based Driver Activity Recognition Using Self-Supervised Learning

Scheduled for presentation during the Regular Session "Sensing, Vision, and Perception II" (WeBT2), Wednesday, September 25, 2024, 14:50−15:10, Salon 5

2024 IEEE 27th International Conference on Intelligent Transportation Systems (ITSC), September 24-27, 2024, Edmonton, Canada

This information is tentative and subject to change. Compiled on December 26, 2024.

Keywords: Sensing, Vision, and Perception; Driver Assistance Systems

Abstract

Amidst the increasing integration of technology into car interiors, the risk of driver distraction has become a critical concern for automotive safety. Addressing this issue requires robust methods for detecting driver distraction, which often rely on intricate models trained on vast amounts of labeled data. However, obtaining such labeled data can be expensive and time-consuming. In this context, self-supervised learning emerges as a promising approach, leveraging unlabeled data to learn meaningful representations and reduce the dependency on annotated datasets. In this study, we explore self-supervised learning methods for 3D skeleton-based driver activity recognition and evaluate the performance of our proposed SkelDINO-SAM method across diverse backbone architectures. Using the Drive&Act dataset, characterized by its long-tailed distribution of activity classes, we assess the effectiveness of our approach under the challenges associated with real-world scenarios. Our findings highlight the superiority of transformer-based backbones, particularly when combined with our SkelDINO-SAM approach. Through extensive experiments and ablation studies, we demonstrate the efficacy of our method in enhancing driver secondary-task recognition accuracy. Overall, our approach outperforms the state-of-the-art method by 11.29%. Our code is publicly available on GitHub.
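The abstract does not detail the training procedure, but the method name suggests a combination of DINO-style self-distillation with sharpness-aware minimization (SAM). The sketch below illustrates what one training step of such a scheme could look like for skeleton sequences. This reading of the name is an assumption, and the SkeletonEncoder, the noise-based augmentations, and all hyperparameters are illustrative placeholders rather than the authors' implementation.

```python
# Minimal sketch of one plausible "SkelDINO-SAM" training step: DINO-style
# self-distillation on skeleton sequences combined with sharpness-aware
# minimization (SAM). All names and numbers here are hypothetical.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkeletonEncoder(nn.Module):
    """Toy transformer backbone over skeleton clips of shape (B, T, J, 3)."""
    def __init__(self, joints=26, dim=128, out_dim=256):
        super().__init__()
        self.embed = nn.Linear(joints * 3, dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, out_dim)

    def forward(self, x):
        b, t, j, c = x.shape
        h = self.embed(x.reshape(b, t, j * c))   # per-frame joint embedding
        h = self.encoder(h).mean(dim=1)          # temporal average pooling
        return self.head(h)

def dino_loss(student_out, teacher_out, center, t_student=0.1, t_teacher=0.04):
    """Cross-entropy between the centered, sharpened teacher and the student."""
    t = F.softmax((teacher_out - center) / t_teacher, dim=-1)
    s = F.log_softmax(student_out / t_student, dim=-1)
    return -(t * s).sum(dim=-1).mean()

student = SkeletonEncoder()
teacher = copy.deepcopy(student)                 # momentum (EMA) teacher
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.SGD(student.parameters(), lr=0.01, momentum=0.9)
center = torch.zeros(256)
rho, ema = 0.05, 0.996                           # SAM radius, teacher EMA rate

for step in range(10):                           # dummy unlabeled skeleton batches
    x = torch.randn(8, 32, 26, 3)                # (batch, frames, joints, xyz)
    v1 = x + 0.01 * torch.randn_like(x)          # two "views"; real skeleton
    v2 = x + 0.01 * torch.randn_like(x)          # augmentations would go here

    def compute_loss():
        with torch.no_grad():
            t1, t2 = teacher(v1), teacher(v2)
        return 0.5 * (dino_loss(student(v2), t1, center)
                      + dino_loss(student(v1), t2, center))

    # SAM step 1: ascend to the approximate worst point in an L2 ball of radius rho.
    compute_loss().backward()
    grad_norm = torch.norm(torch.stack(
        [p.grad.norm() for p in student.parameters()]))
    eps = []
    with torch.no_grad():
        for p in student.parameters():
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)
    opt.zero_grad()

    # SAM step 2: gradient at the perturbed weights, applied to the originals.
    compute_loss().backward()
    with torch.no_grad():
        for p, e in zip(student.parameters(), eps):
            p.sub_(e)                            # undo the perturbation
    opt.step()
    opt.zero_grad()

    # DINO bookkeeping: EMA teacher update and output centering.
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(ema).add_((1 - ema) * ps)
        center = 0.9 * center + 0.1 * teacher(v1).mean(dim=0)
```

The two-step SAM update seeks flatter minima of the self-distillation loss, which is one plausible motivation for pairing it with DINO-style pretraining on a long-tailed dataset such as Drive&Act; the authors' actual design rationale is described in the paper itself.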
