Motion Cognitive Decoding of Cross-Subject Motor Imagery Guided on Different Visual Stimulus Materials.

IF 2.5 | Tier 4 (Medicine) | Q3 NEUROSCIENCES
Tian-Jian Luo, Jing Li, Rui Li, Xiang Zhang, Shen-Rui Wu, Hua Peng
{"title":"不同视觉刺激材料引导下跨主体运动意象的运动认知解码。","authors":"Tian-Jian Luo, Jing Li, Rui Li, Xiang Zhang, Shen-Rui Wu, Hua Peng","doi":"10.31083/j.jin2312218","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Motor imagery (MI) plays an important role in brain-computer interfaces, especially in evoking event-related desynchronization and synchronization (ERD/S) rhythms in electroencephalogram (EEG) signals. However, the procedure for performing a MI task for a single subject is subjective, making it difficult to determine the actual situation of an individual's MI task and resulting in significant individual EEG response variations during motion cognitive decoding.</p><p><strong>Methods: </strong>To explore this issue, we designed three visual stimuli (arrow, human, and robot), each of which was used to present three MI tasks (left arm, right arm, and feet), and evaluated differences in brain response in terms of ERD/S rhythms. To compare subject-specific variations of different visual stimuli, a novel cross-subject MI-EEG classification method was proposed for the three visual stimuli. The proposed method employed a covariance matrix centroid alignment for preprocessing of EEG samples, followed by a model agnostic meta-learning method for cross-subject MI-EEG classification.</p><p><strong>Results and conclusion: </strong>The experimental results showed that robot stimulus materials were better than arrow or human stimulus materials, with an optimal cross-subject motion cognitive decoding accuracy of 79.04%. Moreover, the proposed method produced robust classification of cross-subject MI-EEG signal decoding, showing superior results to conventional methods on collected EEG signals.</p>","PeriodicalId":16160,"journal":{"name":"Journal of integrative neuroscience","volume":"23 12","pages":"218"},"PeriodicalIF":2.5000,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Motion Cognitive Decoding of Cross-Subject Motor Imagery Guided on Different Visual Stimulus Materials.\",\"authors\":\"Tian-Jian Luo, Jing Li, Rui Li, Xiang Zhang, Shen-Rui Wu, Hua Peng\",\"doi\":\"10.31083/j.jin2312218\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Motor imagery (MI) plays an important role in brain-computer interfaces, especially in evoking event-related desynchronization and synchronization (ERD/S) rhythms in electroencephalogram (EEG) signals. However, the procedure for performing a MI task for a single subject is subjective, making it difficult to determine the actual situation of an individual's MI task and resulting in significant individual EEG response variations during motion cognitive decoding.</p><p><strong>Methods: </strong>To explore this issue, we designed three visual stimuli (arrow, human, and robot), each of which was used to present three MI tasks (left arm, right arm, and feet), and evaluated differences in brain response in terms of ERD/S rhythms. To compare subject-specific variations of different visual stimuli, a novel cross-subject MI-EEG classification method was proposed for the three visual stimuli. 
The proposed method employed a covariance matrix centroid alignment for preprocessing of EEG samples, followed by a model agnostic meta-learning method for cross-subject MI-EEG classification.</p><p><strong>Results and conclusion: </strong>The experimental results showed that robot stimulus materials were better than arrow or human stimulus materials, with an optimal cross-subject motion cognitive decoding accuracy of 79.04%. Moreover, the proposed method produced robust classification of cross-subject MI-EEG signal decoding, showing superior results to conventional methods on collected EEG signals.</p>\",\"PeriodicalId\":16160,\"journal\":{\"name\":\"Journal of integrative neuroscience\",\"volume\":\"23 12\",\"pages\":\"218\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-12-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of integrative neuroscience\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.31083/j.jin2312218\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of integrative neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.31083/j.jin2312218","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Background: Motor imagery (MI) plays an important role in brain-computer interfaces, especially in evoking event-related desynchronization and synchronization (ERD/S) rhythms in electroencephalogram (EEG) signals. However, the procedure for performing an MI task is subjective for any single subject, making it difficult to determine how an individual actually performs the task and resulting in significant inter-individual variation in EEG responses during motion cognitive decoding.
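
For context, ERD/S is conventionally quantified as the percentage change of band power in a task window relative to a pre-stimulus baseline. The short Python sketch below illustrates that standard definition only; the mu-band edges, window boundaries, filter order, and function name are illustrative assumptions rather than parameters taken from this study.

```python
# Minimal sketch of ERD/S as percentage band-power change versus baseline.
# All numeric choices here (band, windows, filter order) are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

def erd_percent(trial, fs, band=(8.0, 13.0), baseline=(0.0, 2.0), task=(3.0, 6.0)):
    """ERD/S (%) for one channel of one trial.

    trial    : 1-D array of EEG samples for a single channel
    fs       : sampling rate in Hz
    band     : frequency band of interest (mu rhythm assumed here)
    baseline : (start, end) of the reference window, in seconds
    task     : (start, end) of the MI task window, in seconds
    Negative values indicate desynchronization (ERD), positive values
    indicate synchronization (ERS).
    """
    # Band-pass filter, then square to obtain instantaneous band power
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, trial) ** 2

    def mean_power(window):
        start, end = int(window[0] * fs), int(window[1] * fs)
        return power[start:end].mean()

    p_ref, p_task = mean_power(baseline), mean_power(task)
    return 100.0 * (p_task - p_ref) / p_ref
```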

Methods: To explore this issue, we designed three visual stimuli (arrow, human, and robot), each of which was used to present three MI tasks (left arm, right arm, and feet), and evaluated differences in brain response in terms of ERD/S rhythms. To compare subject-specific variation across the visual stimuli, a novel cross-subject MI-EEG classification method was proposed for the three visual stimuli. The proposed method employed covariance matrix centroid alignment to preprocess the EEG samples, followed by a model-agnostic meta-learning method for cross-subject MI-EEG classification, as sketched below.
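
The preprocessing step can be pictured with a short sketch. The Python snippet below shows one common form of covariance centroid alignment (Euclidean-alignment style, whitening each subject's trials so that the subject's mean spatial covariance becomes the identity); the function name, array shapes, and the use of the arithmetic mean as the centroid are assumptions for illustration, not details confirmed by the paper. In the proposed pipeline, the aligned trials from all subjects would then be passed to a model-agnostic meta-learning (MAML) classifier for cross-subject training.

```python
# Minimal sketch of covariance matrix centroid alignment for EEG trials,
# applied per subject before cross-subject training. Shapes and the choice
# of an arithmetic-mean centroid are illustrative assumptions.
import numpy as np

def centroid_align(trials):
    """Align one subject's EEG trials.

    trials : array of shape (n_trials, n_channels, n_samples)
    returns: aligned trials of the same shape whose mean spatial
             covariance is (approximately) the identity matrix.
    """
    # Per-trial spatial covariance matrices, shape (n_channels, n_channels)
    covs = np.array([x @ x.T / x.shape[1] for x in trials])
    # Centroid of the covariances (a Riemannian mean is another common choice)
    centroid = covs.mean(axis=0)
    # Whitening matrix: centroid^(-1/2) via eigendecomposition
    eigvals, eigvecs = np.linalg.eigh(centroid)
    whitener = eigvecs @ np.diag(eigvals ** -0.5) @ eigvecs.T
    # Apply the same whitening to every trial of this subject
    return np.array([whitener @ x for x in trials])

# Usage: align each subject separately, then pool the aligned data
# aligned = {subject: centroid_align(trials) for subject, trials in data.items()}
```

Aligning each subject to a common covariance reference reduces inter-subject distribution shift, which is what makes the subsequent cross-subject meta-learning step tractable.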

Results and conclusion: The experimental results showed that the robot stimulus material outperformed the arrow and human stimulus materials, with an optimal cross-subject motion cognitive decoding accuracy of 79.04%. Moreover, the proposed method yielded robust cross-subject MI-EEG classification, outperforming conventional methods on the collected EEG signals.

Source journal: Journal of Integrative Neuroscience
CiteScore: 2.80
Self-citation rate: 5.60%
Articles published: 173
Review time: 2 months
Journal introduction: JIN is an international peer-reviewed, open access journal. JIN publishes leading-edge research at the interface of theoretical and experimental neuroscience, focusing across hierarchical levels of brain organization to better understand how diverse functions are integrated. We encourage submissions from scientists of all specialties that relate to brain functioning.