Contrastive fine-grained domain adaptation network for EEG-based vigilance estimation.

Impact Factor: 6.0 · JCR Q1, Computer Science, Artificial Intelligence · CAS Tier 1, Computer Science
Neural Networks, Vol. 179, Article 106617 · Published: 2024-11-01 (Epub: 2024-08-08) · DOI: 10.1016/j.neunet.2024.106617
Kangning Wang, Wei Wei, Weibo Yi, Shuang Qiu, Huiguang He, Minpeng Xu, Dong Ming
{"title":"Contrastive fine-grained domain adaptation network for EEG-based vigilance estimation.","authors":"Kangning Wang, Wei Wei, Weibo Yi, Shuang Qiu, Huiguang He, Minpeng Xu, Dong Ming","doi":"10.1016/j.neunet.2024.106617","DOIUrl":null,"url":null,"abstract":"<p><p>Vigilance state is crucial for the effective performance of users in brain-computer interface (BCI) systems. Most vigilance estimation methods rely on a large amount of labeled data to train a satisfactory model for the specific subject, which limits the practical application of the methods. This study aimed to build a reliable vigilance estimation method using a small amount of unlabeled calibration data. We conducted a vigilance experiment in the designed BCI-based cursor-control task. Electroencephalogram (EEG) signals of eighteen participants were recorded in two sessions on two different days. And, we proposed a contrastive fine-grained domain adaptation network (CFGDAN) for vigilance estimation. Here, an adaptive graph convolution network (GCN) was built to project the EEG data of different domains into a common space. The fine-grained feature alignment mechanism was designed to weight and align the feature distributions across domains at the EEG channel level, and the contrastive information preservation module was developed to preserve the useful target-specific information during the feature alignment. The experimental results show that the proposed CFGDAN outperforms the compared methods in our BCI vigilance dataset and SEED-VIG dataset. Moreover, the visualization results demonstrate the efficacy of the designed feature alignment mechanisms. These results indicate the effectiveness of our method for vigilance estimation. Our study is helpful for reducing calibration efforts and promoting the practical application potential of vigilance estimation methods.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"179 ","pages":"106617"},"PeriodicalIF":6.0000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neural Networks","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1016/j.neunet.2024.106617","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/8/8 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0

Abstract

Vigilance state is crucial for the effective performance of users in brain-computer interface (BCI) systems. Most vigilance estimation methods rely on a large amount of labeled data to train a satisfactory model for a specific subject, which limits their practical application. This study aimed to build a reliable vigilance estimation method using a small amount of unlabeled calibration data. We conducted a vigilance experiment in a designed BCI-based cursor-control task. Electroencephalogram (EEG) signals of eighteen participants were recorded in two sessions on two different days. We then proposed a contrastive fine-grained domain adaptation network (CFGDAN) for vigilance estimation. An adaptive graph convolution network (GCN) was built to project the EEG data of different domains into a common space. A fine-grained feature alignment mechanism was designed to weight and align the feature distributions across domains at the EEG channel level, and a contrastive information preservation module was developed to preserve useful target-specific information during feature alignment. The experimental results show that the proposed CFGDAN outperforms the compared methods on both our BCI vigilance dataset and the SEED-VIG dataset. Moreover, the visualization results demonstrate the efficacy of the designed feature alignment mechanisms. These results indicate the effectiveness of our method for vigilance estimation. Our study helps reduce calibration effort and promotes the practical applicability of vigilance estimation methods.
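To make the described architecture concrete, the following is a minimal, hypothetical sketch of the three components the abstract names: an adaptive GCN encoder with a learnable adjacency over EEG channels, a channel-level (fine-grained) alignment weighted by per-channel discrepancy, and an InfoNCE-style contrastive loss for preserving target-specific information. All layer sizes, loss forms, and names (e.g., `AdaptiveGCN`, `channel_mmd`, `contrastive_preservation`) are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of CFGDAN-like components; details are assumptions,
# not the published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveGCN(nn.Module):
    """Graph convolution over EEG channels with a learnable adjacency matrix."""

    def __init__(self, n_channels: int, in_dim: int, out_dim: int):
        super().__init__()
        # Learnable adjacency shared by source and target domains.
        self.adj = nn.Parameter(
            torch.eye(n_channels) + 0.01 * torch.randn(n_channels, n_channels)
        )
        self.proj = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, in_dim) per-channel EEG features (e.g., band power).
        a = torch.softmax(self.adj, dim=-1)       # row-normalized adjacency
        h = torch.einsum("ij,bjf->bif", a, x)     # aggregate neighboring channels
        return F.relu(self.proj(h))               # (batch, n_channels, out_dim)


def channel_mmd(src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
    """Per-channel discrepancy (mean-feature MMD) used to weight the alignment."""
    # src/tgt: (batch, n_channels, d) -> (n_channels,) discrepancy per channel.
    return ((src.mean(0) - tgt.mean(0)) ** 2).sum(-1)


def contrastive_preservation(tgt_feat: torch.Tensor, tgt_feat_aug: torch.Tensor,
                             temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss that keeps target-specific structure during alignment."""
    z1 = F.normalize(tgt_feat.flatten(1), dim=-1)
    z2 = F.normalize(tgt_feat_aug.flatten(1), dim=-1)
    logits = z1 @ z2.t() / temperature            # (batch, batch) similarities
    labels = torch.arange(z1.size(0))             # positives on the diagonal
    return F.cross_entropy(logits, labels)


if __name__ == "__main__":
    n_channels, in_dim, hidden = 62, 5, 32
    encoder = AdaptiveGCN(n_channels, in_dim, hidden)
    regressor = nn.Sequential(nn.Flatten(), nn.Linear(n_channels * hidden, 1))

    src = torch.randn(16, n_channels, in_dim)     # labeled source-subject features
    tgt = torch.randn(16, n_channels, in_dim)     # unlabeled target calibration data
    y_src = torch.rand(16, 1)                     # vigilance labels in [0, 1]

    f_src, f_tgt = encoder(src), encoder(tgt)
    mmd = channel_mmd(f_src, f_tgt)
    w = torch.softmax(mmd.detach(), dim=0)        # weight channels by discrepancy
    align_loss = (w * mmd).sum()
    task_loss = F.mse_loss(regressor(f_src), y_src)
    contrast_loss = contrastive_preservation(
        f_tgt, f_tgt + 0.05 * torch.randn_like(f_tgt)
    )

    loss = task_loss + align_loss + contrast_loss
    loss.backward()
    print(float(loss))
```

In this sketch the alignment weights are simply a softmax over detached per-channel discrepancies; the paper's actual weighting and contrastive formulation should be taken from the full text.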

Source journal
Neural Networks (Engineering & Technology / Computer Science: Artificial Intelligence)
CiteScore: 13.90
Self-citation rate: 7.70%
Articles published per year: 425
Average review time: 67 days
Journal introduction: Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.