Improving Brain Computer Interfaces Using Deep Scale-Invariant Temporal History Applied to Scalp Electroencephalogram Data

Gaurav Anand, A. Ansari, Bev Dobrenz, Yibo Wang, Brandon G. Jacques, P. Sederberg
DOI: 10.1109/SIEDS52267.2021.9483789
Published in: 2021 Systems and Information Engineering Design Symposium (SIEDS)
Publication date: 2021-04-30
Citations: 0

Abstract

Brain Computer Interface (BCI) applications employ machine learning to decode neural signals through time to generate actions. One issue facing such machine learning algorithms is how much of the past they need to decode the present. DeepSITH (Deep Scale-Invariant Temporal History), is a deep neural network with layers inspired by how the mammalian brain represents recent vs. less-recent experience. A single SITH layer maintains a log-compressed representation of the past that becomes less accurate with older events, unlike other approaches that maintain a perfect copy of events regardless of how far in the past they occurred. By stacking layers of this compressed representation, we hypothesized that DeepSITH would be able to decode patterns of neural activity from farther in the past and combine them efficiently to guide the BCI in the present. We tested our approach with the Kaggle "Grasp and Lift challenge" dataset. This motor movement dataset has 12 subjects, 10 series of 30 grasp and lift trials per subject, with 6 classes of events to decode. We benchmark DeepSITH performances on this dataset against another common machine learning technique for integrating features over extended time scales, long short-term memory (LSTM). DeepSITH reproducibly achieves higher accuracy in predicting motor movement events than LSTM, and also takes significantly fewer epochs and less memory to train, in comparison to LSTM. In summary, DeepSITH can efficiently process more data, with increased prediction accuracy and learning speed. This result shows that DeepSITH is an advantageous model to consider when developing BCI technologies.
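The abstract's key idea is that a SITH layer keeps a log-compressed record of the past: recent events are stored at fine temporal resolution while older events blur together. As a rough intuition (not the authors' actual implementation, which builds on a Laplace-domain scale-invariant memory), this behavior can be sketched with a bank of leaky integrators whose time constants are log-spaced, so each unit forgets at a different, geometrically increasing timescale:

```python
import numpy as np

def log_compressed_history(signal, n_taus=8, tau_min=1.0, tau_max=100.0):
    """Illustrative sketch of a scale-invariant temporal memory: a bank of
    leaky integrators with log-spaced time constants. Fast units track the
    recent past precisely; slow units retain a blurred trace of older events,
    analogous to the compressed representation a SITH layer maintains."""
    taus = np.geomspace(tau_min, tau_max, n_taus)   # log-spaced timescales
    alphas = 1.0 / taus                             # per-unit leak rates
    memory = np.zeros(n_taus)
    history = []
    for x in signal:
        memory += alphas * (x - memory)             # exponential smoothing per timescale
        history.append(memory.copy())
    return taus, np.array(history)

# An impulse at t=0 fades quickly in the fast units but lingers in the slow ones.
taus, hist = log_compressed_history([1.0] + [0.0] * 99)
```

Stacking such layers, with each layer reading from the compressed output of the one below, is the architectural move that lets DeepSITH integrate evidence over progressively longer horizons without storing a verbatim copy of the input, in contrast to an LSTM's gated hidden state.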