Decoding Driving Intentions via a Novel Brain–Computer Interface Paradigm With Low Cognitive Load and High Robustness

Jianxiang Sun; Zongtan Zhou; Yadong Liu; Daxue Liu; Haoqiang Chen; Yingxin Liu; Dewen Hu

IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 56, no. 5, pp. 3235–3249
DOI: 10.1109/TSMC.2026.3657849
Published online: 6 February 2026; issue date: 1 March 2026
URL: https://ieeexplore.ieee.org/document/11372610/
Citations: 0
Abstract
In recent years, brain–computer interfaces (BCIs) based on electroencephalography (EEG) have been increasingly applied to human–vehicle collaborative driving. In this article, we design a novel BCI paradigm that embeds subliminal steady-state visual evoked potential (SSVEP) stimuli within a friendly interaction framework of short driving videos, ensuring low-cognitive-load interaction for drivers while also enhancing the robustness of EEG decoding. To robustly decode these signals, we propose a novel multidomain spatial–frequency–temporal multiscale gating convolutional neural network (SFT-GCNN), which explicitly addresses EEG nonstationarity and subject variability through three key innovations: 1) a channel-wise attention mechanism that extracts task-relevant spatial topologies; 2) a multiscale gating convolutional unit (GCU) that adaptively filters noise and captures temporal dynamics across diverse receptive fields; and 3) a multiview fusion strategy that integrates spatial, temporal, and spectral features under the joint supervision of cross-entropy (CE) and center loss to enforce intraclass compactness. The proposed decoding method outperforms several benchmark methods, achieving accuracies of 82.91% ± 4.35% and 78.23% ± 1.87% in the subject-dependent and subject-independent experiments, respectively. Furthermore, the subjective fatigue scales and a mean theta-to-alpha ratio (TAR) of 1.13 confirm that the proposed stimulus paradigm does not induce additional visual fatigue in participants. These results demonstrate that our approach effectively balances high decoding robustness with user comfort in practical driving scenarios.
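As an illustrative sketch only (the paper's actual implementation is not published here), the joint supervision of cross-entropy and center loss mentioned in the abstract can be written as a single objective: standard CE on the classifier logits plus a penalty on the squared distance between each fused feature vector and its class center, weighted by a balance factor. All names below (`joint_loss`, `lam`, the center matrix) are hypothetical.

```python
import numpy as np

def softmax(z):
    # numerically stable row-wise softmax
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def joint_loss(features, logits, labels, centers, lam=0.01):
    """CE + center-loss joint supervision (illustrative sketch).

    features: (N, D) fused multiview embeddings
    logits:   (N, C) classifier outputs
    labels:   (N,)   integer class labels
    centers:  (C, D) per-class feature centers (learnable in practice)
    lam:      weight balancing the two terms (assumed hyperparameter)
    """
    probs = softmax(logits)
    n = len(labels)
    ce = -np.log(probs[np.arange(n), labels] + 1e-12).mean()
    # distance of each sample to its own class center -> intraclass compactness
    diff = features - centers[labels]
    center = 0.5 * (diff ** 2).sum(axis=1).mean()
    return ce + lam * center
```

In training, the centers would be updated alongside the network weights so that features of the same driving-intention class cluster tightly, which is the stated purpose of the center-loss term.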
Journal Introduction:
The IEEE Transactions on Systems, Man, and Cybernetics: Systems encompasses the fields of systems engineering, covering issue formulation, analysis, and modeling throughout the systems engineering lifecycle phases. It addresses decision-making, issue interpretation, systems management, processes, and various methods such as optimization, modeling, and simulation in the development and deployment of large systems.