Multimodal Wearable-Based Automated Driver Inattention State Assessment Using Multidevices and Novel Cross-Modal Attention Framework

Impact Factor 2.2 | Q3 | Engineering, Electrical & Electronic
Kaveti Pavan;Ankit Singh;Digvijay S. Pawar;Nagarajan Ganapathy
DOI: 10.1109/LSENS.2025.3596610
Journal: IEEE Sensors Letters, vol. 9, no. 9, pp. 1-4
Published: 2025-08-07 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11119294/
Citations: 0

Abstract

Driver inattention detection remains a critical challenge for driver well-being, requiring robust systems that can distinguish stress-induced mental load during naturalistic driving. Current approaches are limited in fusing data from multiple wearables and in real-time biosignal assessment. This letter proposes a novel cross-squeeze-and-excitation convolutional neural network (CNN) framework to process simultaneously acquired signals from multiple wearables worn by 15 participants in controlled driving scenarios. The multimodal signals are processed by multistage attention mechanisms [electrocardiogram (ECG) $\leftrightarrow$ electrodermal activity (EDA), ECG $\rightarrow$ EDA, EDA $\rightarrow$ ECG] built on 1D-CNN blocks and optimized for 10-s signal segments. The proposed approach classifies driver inattention states. ECG $\rightarrow$ EDA attention achieves 76.54% average accuracy under leave-one-subject-out cross validation, outperforming unimodal approaches by 12.4% and bidirectional attention by 4.8%. Feature visualizations confirm enhanced pattern discrimination in inattention conditions. This letter advances driver health monitoring systems through effective wearable integration and adaptive feature weighting, with potential for edge deployment and clinical stress assessment applications.
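The letter does not publish its implementation, but the directional "ECG $\rightarrow$ EDA" mechanism it names follows the squeeze-and-excitation pattern applied across modalities: channel statistics of one modality's feature map gate the channels of the other. The sketch below is a minimal numpy illustration of that idea only; the function name, feature-map shapes, and reduction ratio are hypothetical assumptions, not the paper's architecture.

```python
import numpy as np

def squeeze_excite_cross(source, target, w1, w2):
    """Cross-modal squeeze-and-excitation (hypothetical sketch):
    channel statistics of the source modality (e.g. ECG features)
    gate the channels of the target modality (e.g. EDA features).
    source, target: (channels, time) feature maps; w1, w2: learned weights."""
    # Squeeze: global average pooling over time on the source modality
    s = source.mean(axis=1)                       # (channels,)
    # Excite: bottleneck with ReLU, then sigmoid to get per-channel gates
    h = np.maximum(w1 @ s, 0.0)                   # (channels // r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))        # (channels,), in (0, 1)
    # Recalibrate: scale each target channel by the cross-modal gate
    return target * gate[:, None]

# Toy example: 8-channel feature maps over a 10-s window (shapes assumed)
rng = np.random.default_rng(0)
ecg_feat = rng.standard_normal((8, 40))
eda_feat = rng.standard_normal((8, 40))
w1 = rng.standard_normal((2, 8)) * 0.1           # reduction ratio r = 4
w2 = rng.standard_normal((8, 2)) * 0.1
out = squeeze_excite_cross(ecg_feat, eda_feat, w1, w2)  # ECG -> EDA gating
```

In this reading, the three attention variants compared in the letter differ only in which modality supplies the squeeze statistics and which is recalibrated; the bidirectional variant would apply the gating in both directions.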
Source journal: IEEE Sensors Letters (Engineering - Electrical and Electronic Engineering)
CiteScore: 3.50
Self-citation rate: 7.10%
Articles per year: 194