Subject-invariant feature learning for mTBI identification using LSTM-based variational autoencoder with adversarial regularization

Shiva Salsabilian, L. Najafizadeh
Frontiers in Signal Processing · IF 1.3 · Q3 (Engineering, Electrical & Electronic)
DOI: 10.3389/frsip.2022.1019253 · Published: 2022-11-30 · Citations: 0

Abstract

Developing models for identifying mild traumatic brain injury (mTBI) has often been challenging due to large variations in data across subjects, making it difficult for mTBI-identification models to generalize to data from unseen subjects. To tackle this problem, we present a long short-term memory-based adversarial variational autoencoder (LSTM-AVAE) framework for subject-invariant mTBI feature extraction. In the proposed model, an LSTM variational autoencoder (LSTM-VAE) first combines the representation-learning ability of the variational autoencoder (VAE) with the temporal modeling characteristics of the LSTM to learn latent space representations from neural activity. Then, to detach the subject’s individuality from the neural feature representations and make the model suitable for cross-subject transfer learning, an adversary network is attached to the encoder in a discriminative setting. The model is trained using a hold-one-out approach: the trained encoder is used to extract representations from the held-out subject’s data, and the extracted representations are then classified into normal and mTBI groups using different classifiers. The proposed model is evaluated on cortical recordings of Thy1-GCaMP6s transgenic mice obtained via widefield calcium imaging, before and after inducing injury. In the cross-subject transfer learning experiments, the proposed LSTM-AVAE framework achieves classification accuracies of 95.8% and 97.79%, without and with the conditional VAE (cVAE), respectively, demonstrating that the proposed model is capable of learning invariant representations from mTBI data.
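
The architecture described in the abstract can be summarized in a brief sketch. The following is a minimal, illustrative PyTorch sketch (not the authors' implementation) of an LSTM-based variational autoencoder with an adversarial subject discriminator attached to the encoder. The class names, layer sizes, sequence shape (batch, time, channels), number of subjects, and the loss weights beta and lam are all hypothetical assumptions; the conditional-VAE variant and the downstream normal-vs-mTBI classifiers are omitted.

```python
# Minimal sketch of an adversarial LSTM-VAE for subject-invariant features.
# Assumptions: PyTorch; inputs shaped (batch, time, channels); alternating
# updates between the adversary and the encoder/decoder.
import torch
import torch.nn as nn

class LSTMEncoder(nn.Module):
    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden_dim, batch_first=True)
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)

    def forward(self, x):
        _, (h, _) = self.lstm(x)           # final hidden state summarizes the sequence
        h = h[-1]                          # (batch, hidden_dim)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar

class LSTMDecoder(nn.Module):
    def __init__(self, latent_dim, hidden_dim, out_dim, seq_len):
        super().__init__()
        self.seq_len = seq_len
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, out_dim)

    def forward(self, z):
        z_seq = z.unsqueeze(1).repeat(1, self.seq_len, 1)  # feed z at every time step
        h, _ = self.lstm(z_seq)
        return self.out(h)                 # reconstructed sequence

class SubjectAdversary(nn.Module):
    """Predicts subject identity from z; the encoder is trained to fool it."""
    def __init__(self, latent_dim, n_subjects):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_subjects))

    def forward(self, z):
        return self.net(z)

def avae_losses(x, x_hat, mu, logvar, subj_logits, subj_labels, beta=1.0, lam=0.1):
    recon = nn.functional.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    adv = nn.functional.cross_entropy(subj_logits, subj_labels)
    # Encoder/decoder minimize reconstruction + KL while maximizing the
    # adversary's error (hence the minus sign); the adversary separately
    # minimizes adv on its own update step.
    vae_loss = recon + beta * kl - lam * adv
    return vae_loss, adv
```

Under this setup, a training loop would alternate two steps: update the adversary to minimize its subject cross-entropy on detached latent codes, then update the encoder and decoder with vae_loss, which rewards reconstructions whose latent codes the adversary cannot attribute to a specific subject. The held-out subject is excluded from training entirely and only encoded and classified at test time.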