ITSC 2025 Paper Abstract


Paper TH-EA-T29.3

Ma, Yongqiang (Institute of Artificial Intelligence and Robotics (IAIR), School), Jing, Haodong (Xi'an Jiaotong University), Gao, Wenjie (Xi'an Jiaotong University), Hua, Haibo (Xi'an Jiaotong University), Zhang, Xuetao (Xi'an Jiaotong University)

MFE-Driver: Multimodal Fusion Network for EEG-Based Driver Emotion State Recognition

Scheduled for presentation during the Regular Session "S29b-Human Factors and Human Machine Interaction in Automated Driving" (TH-EA-T29), Thursday, November 20, 2025, 14:10−14:30, Currumbin

2025 IEEE 28th International Conference on Intelligent Transportation Systems (ITSC), November 18-21, 2025, Gold Coast, Australia

This information is tentative and subject to change. Compiled on October 18, 2025

Keywords Driver Behavior Monitoring and Feedback Systems for Semi-autonomous Vehicles, Human-Machine Interaction Systems for Enhanced Driver Assistance and Safety, User-Centric HMI Design for Autonomous Vehicle Control Systems

Abstract

In recent years, the impact of drivers' negative emotions on driving performance and the safety of intelligent transportation systems has drawn increasing concern: emotional fluctuations, especially negative emotions, can influence driving behavior, reaction time, and decision-making. EEG signals are widely used for emotion analysis due to their high temporal resolution, but studies on EEG-based driver emotion recognition and behavior prediction remain limited, especially in real driving scenarios. Most existing studies either focus on a single modality or lack efficient fusion of heterogeneous multimodal information. To address these challenges, we propose MFE-Driver, a novel multimodal fusion network that integrates EEG signals with behavioral and expression information for comprehensive driver emotion classification and behavior prediction. MFE-Driver is designed with a feature processing module for extracting task-specific representations, a feature enhancement module for aligning multimodal features, and a fine-grained feature fusion module for effective multimodal learning. Its multi-task learning strategy enables the model to capture complementary information across modalities, thus enhancing generalization capability. Experiments on a public driver EEG dataset show that MFE-Driver achieves excellent classification performance, supporting studies of driver behavior and emotional states.
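The abstract names three stages (per-modality feature processing, feature enhancement/alignment, fine-grained fusion) feeding two task heads. The paper's actual architecture is not given here, so the following is only a minimal NumPy sketch of that general pipeline shape; all dimensions, module choices (linear projections, L2 normalization, concatenation fusion), and class counts are hypothetical illustrations, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature dimensions (not from the paper).
DIMS = {"eeg": 64, "behavior": 16, "expression": 32}
COMMON = 24        # assumed shared dimension after alignment
N_EMOTIONS = 3     # illustrative emotion classes
N_BEHAVIORS = 4    # illustrative behavior classes

# Feature processing: one projection per modality, standing in for
# task-specific feature extractors.
proc = {m: rng.standard_normal((d, COMMON)) * 0.1 for m, d in DIMS.items()}

def enhance(x):
    """Feature enhancement stand-in: L2-normalize so modalities share a scale."""
    return x / (np.linalg.norm(x) + 1e-8)

def fuse(features):
    """Fusion stand-in: concatenate the aligned modality features."""
    return np.concatenate(features)

# Multi-task heads sharing the fused representation.
head_emotion = rng.standard_normal((COMMON * len(DIMS), N_EMOTIONS)) * 0.1
head_behavior = rng.standard_normal((COMMON * len(DIMS), N_BEHAVIORS)) * 0.1

def forward(inputs):
    aligned = [enhance(inputs[m] @ proc[m]) for m in DIMS]
    fused = fuse(aligned)
    return fused @ head_emotion, fused @ head_behavior

inputs = {m: rng.standard_normal(d) for m, d in DIMS.items()}
emotion_logits, behavior_logits = forward(inputs)
print(emotion_logits.shape, behavior_logits.shape)  # (3,) (4,)
```

The sketch only illustrates how shared fused features can drive both emotion classification and behavior prediction in a multi-task setup; training losses and the fine-grained fusion details are beyond what the abstract states.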


All Content © PaperCept, Inc.

