Decoding Driving Intentions via a Novel Brain–Computer Interface Paradigm With Low Cognitive Load and High Robustness

IF 8.7 · CAS Tier 1 (Computer Science) · JCR Q1, AUTOMATION & CONTROL SYSTEMS
Jianxiang Sun;Zongtan Zhou;Yadong Liu;Daxue Liu;Haoqiang Chen;Yingxin Liu;Dewen Hu
{"title":"Decoding Driving Intentions via a Novel Brain–Computer Interface Paradigm With Low Cognitive Load and High Robustness","authors":"Jianxiang Sun;Zongtan Zhou;Yadong Liu;Daxue Liu;Haoqiang Chen;Yingxin Liu;Dewen Hu","doi":"10.1109/TSMC.2026.3657849","DOIUrl":null,"url":null,"abstract":"In recent years, brain–computer interface (BCI) based on electroencephalography (EEG) has been increasingly applied in human–vehicle collaborative driving. In this article, we design a novel BCI paradigm, incorporating subliminal steady state visual evoked potential (SSVEP) within the friendly interaction framework of short driving videos, which ensures low cognitive load interaction for drivers while also enhancing the robustness of EEG decoding. To robustly decode these signals, we propose a novel multidomain spatial–frequency–temporal multiscale gating convolutional neural network (SFT-GCNN), which explicitly addresses EEG nonstationarity and subject variability through three key innovations: 1) a channel-wise attention mechanism to extract task-relevant spatial topologies; 2) a multiscale gating convolutional unit (GCU) that adaptively filters noise and captures temporal dynamics across diverse receptive fields; and 3) a multiview fusion strategy integrating spatial, temporal, and spectral features under the joint supervision of cross-entropy (CE) and center loss to enforce intraclass compactness. The proposed decoding method outperforms several benchmark methods, achieving accuracies of 82.91% <inline-formula> <tex-math>$\\pm ~4.35$ </tex-math></inline-formula>% and 78.23% <inline-formula> <tex-math>$\\pm ~1.87$ </tex-math></inline-formula>% in the subject-dependent and subject-independent experiments. Furthermore, the subjective fatigue scales and a mean theta-to-alpha ratio (TAR) of 1.13 confirm that the proposed stimulus paradigm does not induce additional visual fatigue to participants. These results demonstrate that our approach effectively balances high decoding robustness with user comfort in practical driving scenarios.","PeriodicalId":48915,"journal":{"name":"IEEE Transactions on Systems Man Cybernetics-Systems","volume":"56 5","pages":"3235-3249"},"PeriodicalIF":8.7000,"publicationDate":"2026-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Systems Man Cybernetics-Systems","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11372610/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/2/6 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

In recent years, brain–computer interfaces (BCIs) based on electroencephalography (EEG) have been increasingly applied to human–vehicle collaborative driving. In this article, we design a novel BCI paradigm that incorporates subliminal steady-state visual evoked potentials (SSVEPs) within the friendly interaction framework of short driving videos, ensuring low-cognitive-load interaction for drivers while also enhancing the robustness of EEG decoding. To robustly decode these signals, we propose a novel multidomain spatial–frequency–temporal multiscale gating convolutional neural network (SFT-GCNN), which explicitly addresses EEG nonstationarity and subject variability through three key innovations: 1) a channel-wise attention mechanism to extract task-relevant spatial topologies; 2) a multiscale gating convolutional unit (GCU) that adaptively filters noise and captures temporal dynamics across diverse receptive fields; and 3) a multiview fusion strategy integrating spatial, temporal, and spectral features under the joint supervision of cross-entropy (CE) and center loss to enforce intraclass compactness. The proposed decoding method outperforms several benchmark methods, achieving accuracies of 82.91% ± 4.35% and 78.23% ± 1.87% in the subject-dependent and subject-independent experiments, respectively. Furthermore, subjective fatigue scales and a mean theta-to-alpha ratio (TAR) of 1.13 confirm that the proposed stimulus paradigm does not induce additional visual fatigue in participants. These results demonstrate that our approach effectively balances high decoding robustness with user comfort in practical driving scenarios.
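The joint supervision mentioned in the abstract combines a standard cross-entropy term with a center loss that pulls each fused feature embedding toward a learnable center of its class, which is what enforces intraclass compactness. The following is a minimal PyTorch sketch of such a loss combination, given only for orientation; it is not the authors' implementation, and the feature dimension, class count, and weighting factor `lam` are illustrative assumptions rather than values reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CenterLoss(nn.Module):
    """Center loss: penalizes the squared distance between each feature
    vector and the learnable center of its class (intraclass compactness)."""

    def __init__(self, num_classes: int, feat_dim: int):
        super().__init__()
        # One learnable center per class; dimensions are illustrative.
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        centers_batch = self.centers[labels]                # (B, feat_dim)
        return 0.5 * ((features - centers_batch) ** 2).sum(dim=1).mean()


def joint_loss(logits, features, labels, center_loss, lam=0.01):
    """Joint supervision: cross-entropy on the class logits plus a
    weighted center-loss term on the fused embedding ('lam' is assumed)."""
    return F.cross_entropy(logits, labels) + lam * center_loss(features, labels)


# Usage sketch with dummy tensors (4 driving-intention classes assumed):
center_loss = CenterLoss(num_classes=4, feat_dim=64)
logits, feats = torch.randn(8, 4), torch.randn(8, 64)
labels = torch.randint(0, 4, (8,))
loss = joint_loss(logits, feats, labels, center_loss)
```

In practice the center parameters are simply added to the optimizer alongside the network weights, so both the embeddings and the class centers are updated during training.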
Source journal

IEEE Transactions on Systems, Man, Cybernetics: Systems (AUTOMATION & CONTROL SYSTEMS; COMPUTER SCIENCE, CYBERNETICS)

CiteScore: 18.50 · Self-citation rate: 11.50% · Articles per year: 812 · Review time: 6 months
Journal description: The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.