An Intelligent Camera-Based Contactless Driver Stress State Monitoring Using Multimodality Fusion

Impact Factor: 2.2 | JCR Quartile: Q3 (Engineering, Electrical & Electronic)
Authors: Swarubini P J, Thomas M. Deserno, Nagarajan Ganapathy
DOI: 10.1109/LSENS.2025.3595917
Journal: IEEE Sensors Letters, vol. 9, no. 9, pp. 1-4
Publication date: 2025-08-05
URL: https://ieeexplore.ieee.org/document/11113323/
Citations: 0

Abstract

Driver stress involves complex psychological, physiological, and behavioral responses to stressors across different mobility spaces, which can lead to road accidents. Recently, biosignals derived from noncontact sensing have been explored for mental health assessment. However, extracting camera-based biosignals in mobility environments remains challenging. In this study, we aim to classify driver stress using imaging photoplethysmography (iPPG) signals, facial keypoints, and a fusion-based convolutional neural network (CNN). For this, we acquired infrared facial videos from healthy subjects (N=20) during simulated driving. iPPG signals and facial keypoints were extracted using the local group invariance method and a CNN, respectively. The iPPG signals were processed with a 1-D CNN and the facial keypoints with a 2-D CNN for feature learning. The proposed approach is able to classify drivers' stress states. Experimental results show that the proposed fusion approach achieved a mean classification accuracy (ACC) of 87.00% and an F1-score of 86.33%. Among the individual models, the iPPG-based model achieved the best mean ACC (90.00%) and F1-score (90.33%). Thus, the framework could be extended to driver stress detection in real-time scenarios, enabling early stress detection.
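The abstract describes feature-level fusion of a 1-D CNN operating on iPPG signals and a 2-D CNN operating on facial keypoints. The following PyTorch code is a minimal sketch of one such two-branch fusion classifier; the input shapes (256-sample iPPG windows, 64x64 keypoint heatmaps), layer sizes, and the two-class output are illustrative assumptions, not the architecture reported in the letter.

```python
# Hedged sketch of a two-branch fusion CNN for driver stress classification.
# Assumptions (not from the paper): iPPG windows of 256 samples, facial
# keypoints rendered as 64x64 single-channel heatmaps, binary output
# (stressed vs. non-stressed). Layer sizes are illustrative only.
import torch
import torch.nn as nn

class FusionStressNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        # 1-D CNN branch for iPPG signal windows -> 32-D feature vector
        self.ippg_branch = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        # 2-D CNN branch for facial-keypoint heatmaps -> 32-D feature vector
        self.keypoint_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Feature-level fusion by concatenation, then a small classifier head
        self.classifier = nn.Sequential(
            nn.Linear(32 + 32, 64), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(64, n_classes),
        )

    def forward(self, ippg: torch.Tensor, keypoints: torch.Tensor) -> torch.Tensor:
        f1 = self.ippg_branch(ippg)            # ippg: (batch, 1, 256)
        f2 = self.keypoint_branch(keypoints)   # keypoints: (batch, 1, 64, 64)
        return self.classifier(torch.cat([f1, f2], dim=1))

if __name__ == "__main__":
    model = FusionStressNet()
    logits = model(torch.randn(4, 1, 256), torch.randn(4, 1, 64, 64))
    print(logits.shape)  # torch.Size([4, 2])
```

The separate 1-D and 2-D branches mirror the two modalities named in the abstract; concatenating their pooled features is only one common fusion choice, and the paper may use a different fusion strategy.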
Source Journal
IEEE Sensors Letters (Engineering: Electrical and Electronic Engineering)
CiteScore: 3.50
Self-citation rate: 7.10%
Annual publications: 194