Decoding Categories from Human Brain Activity in the Human Visual Cortex Using the Triplet Network

Lulu Hu, Jingwei Li, Chi Zhang, Li Tong
DOI: 10.1145/3448748.3448769
Published in: Proceedings of the 2021 International Conference on Bioinformatics and Intelligent Computing (2021-01-22)
Citations: 2

Abstract

Decoding visual stimuli from functional magnetic resonance imaging (fMRI) is of great significance for understanding the neural mechanisms of visual information processing in the human brain. How to extract effective information from the massive voxel data of the brain in order to predict brain states is a problem worth investigating in fMRI research. However, the inherent characteristics of fMRI data, small sample sizes and high dimensionality, limit the performance of brain decoding. When recognizing objects, people usually compare them against previously learned prior knowledge, an effective way of acquiring visual information that does not require a complete understanding of the visual input. In this paper, we propose a new visual classification model, based on the triplet network, to decode stimulus categories from the brain's visual information. The triplet network is a model framework with a comparison mechanism similar to that of human visual object recognition; it contains three weight-sharing branch subnetworks, which in our model are composed of fully connected networks. Our results showed decoding accuracies of 57.5±1.86% and 44.17±1.31% for subjects S1 and S2, respectively. For S1 this was about 6% higher than the best traditional machine learning classifier, SVM, while for S2 it was nearly 3.5% higher than SVM. Our results confirm the validity of comparing differences between samples when working with small-quantity fMRI data.
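The comparison mechanism described above can be sketched with a minimal, framework-free example. Three weight-sharing branches mean the same embedding function (with the same parameters) is applied to an anchor sample, a positive sample from the same category, and a negative sample from a different category; a hinge-style triplet loss then pushes the anchor-positive distance below the anchor-negative distance by a margin. This is an illustrative sketch, not the authors' implementation: the `embed` function here is a trivial linear placeholder standing in for the fully connected subnetworks, and the toy vectors stand in for voxel feature vectors.

```python
import math

def embed(x, weights):
    # Shared "subnetwork": one linear map used for all three branches
    # (weight sharing = identical parameters for anchor/positive/negative).
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in weights]

def euclidean(a, b):
    # Distance between two embeddings.
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def triplet_loss(anchor, positive, negative, weights, margin=1.0):
    # Hinge-style triplet loss: max(0, d(a, p) - d(a, n) + margin).
    ea, ep, en = (embed(v, weights) for v in (anchor, positive, negative))
    return max(0.0, euclidean(ea, ep) - euclidean(ea, en) + margin)

# Toy 2-D "voxel" vectors: the positive resembles the anchor, the negative does not.
W = [[1.0, 0.0], [0.0, 1.0]]  # identity "network", purely for illustration
anchor, positive, negative = [1.0, 0.0], [0.9, 0.1], [0.0, 1.0]
loss = triplet_loss(anchor, positive, negative, W)
```

With this well-separated triplet, d(a, p) ≈ 0.14 and d(a, n) ≈ 1.41, so the margin of 1.0 is already satisfied and the loss is zero; a harder triplet (or a larger margin) yields a positive loss that training would minimize.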