Efficient Neural Decoding Based on Multimodal Training

IF 2.7 · CAS Tier 3 (Medicine) · JCR Q3 · NEUROSCIENCES
Yun Wang
{"title":"基于多模态训练的高效神经解码","authors":"Yun Wang","doi":"10.3390/brainsci14100988","DOIUrl":null,"url":null,"abstract":"<p><strong>Background/objectives: </strong>Neural decoding methods are often limited by the performance of brain encoders, which map complex brain signals into a latent representation space of perception information. These brain encoders are constrained by the limited amount of paired brain and stimuli data available for training, making it challenging to learn rich neural representations.</p><p><strong>Methods: </strong>To address this limitation, we present a novel multimodal training approach using paired image and functional magnetic resonance imaging (fMRI) data to establish a brain masked autoencoder that learns the interactions between images and brain activities. Subsequently, we employ a diffusion model conditioned on brain data to decode realistic images.</p><p><strong>Results: </strong>Our method achieves high-quality decoding results in semantic contents and low-level visual attributes, outperforming previous methods both qualitatively and quantitatively, while maintaining computational efficiency. Additionally, our method is applied to decode artificial patterns across region of interests (ROIs) to explore their functional properties. We not only validate existing knowledge concerning ROIs but also unveil new insights, such as the synergy between early visual cortex and higher-level scene ROIs, as well as the competition within the higher-level scene ROIs.</p><p><strong>Conclusions: </strong>These findings provide valuable insights for future directions in the field of neural decoding.</p>","PeriodicalId":9095,"journal":{"name":"Brain Sciences","volume":"14 10","pages":""},"PeriodicalIF":2.7000,"publicationDate":"2024-09-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11506634/pdf/","citationCount":"0","resultStr":"{\"title\":\"Efficient Neural Decoding Based on Multimodal Training.\",\"authors\":\"Yun Wang\",\"doi\":\"10.3390/brainsci14100988\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background/objectives: </strong>Neural decoding methods are often limited by the performance of brain encoders, which map complex brain signals into a latent representation space of perception information. These brain encoders are constrained by the limited amount of paired brain and stimuli data available for training, making it challenging to learn rich neural representations.</p><p><strong>Methods: </strong>To address this limitation, we present a novel multimodal training approach using paired image and functional magnetic resonance imaging (fMRI) data to establish a brain masked autoencoder that learns the interactions between images and brain activities. Subsequently, we employ a diffusion model conditioned on brain data to decode realistic images.</p><p><strong>Results: </strong>Our method achieves high-quality decoding results in semantic contents and low-level visual attributes, outperforming previous methods both qualitatively and quantitatively, while maintaining computational efficiency. Additionally, our method is applied to decode artificial patterns across region of interests (ROIs) to explore their functional properties. 
We not only validate existing knowledge concerning ROIs but also unveil new insights, such as the synergy between early visual cortex and higher-level scene ROIs, as well as the competition within the higher-level scene ROIs.</p><p><strong>Conclusions: </strong>These findings provide valuable insights for future directions in the field of neural decoding.</p>\",\"PeriodicalId\":9095,\"journal\":{\"name\":\"Brain Sciences\",\"volume\":\"14 10\",\"pages\":\"\"},\"PeriodicalIF\":2.7000,\"publicationDate\":\"2024-09-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11506634/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Brain Sciences\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3390/brainsci14100988\",\"RegionNum\":3,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Brain Sciences","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3390/brainsci14100988","RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 0

Abstract


Background/objectives: Neural decoding methods are often limited by the performance of brain encoders, which map complex brain signals into a latent representation space of perceptual information. These brain encoders are constrained by the limited amount of paired brain and stimulus data available for training, making it challenging to learn rich neural representations.

Methods: To address this limitation, we present a novel multimodal training approach that uses paired image and functional magnetic resonance imaging (fMRI) data to build a brain masked autoencoder that learns the interactions between images and brain activity. Subsequently, we employ a diffusion model conditioned on brain data to decode realistic images.
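The abstract gives no implementation details, but the core idea it describes, masked-autoencoder pretraining on paired image and fMRI data so the brain encoder learns cross-modal structure, can be illustrated concretely. The PyTorch sketch below is a hypothetical minimal version, not the paper's code: the module names, dimensions, 75% mask ratio, and the small fusion transformer are all illustrative assumptions.

```python
# Minimal sketch of multimodal masked-autoencoder training on paired
# image / fMRI data. All names, dimensions, and architectural choices
# are illustrative assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class BrainMAE(nn.Module):
    def __init__(self, n_voxels=4000, img_dim=768, latent_dim=512, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.fmri_embed = nn.Linear(n_voxels, latent_dim)   # brain encoder branch
        self.img_embed = nn.Linear(img_dim, latent_dim)     # image-feature branch
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=latent_dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.decode_fmri = nn.Linear(latent_dim, n_voxels)  # reconstruct masked voxels

    def forward(self, fmri, img_feat):
        # Randomly mask a fraction of the fMRI signal (MAE-style objective).
        mask = (torch.rand_like(fmri) < self.mask_ratio).float()
        visible = fmri * (1 - mask)
        # Jointly encode the visible brain signal and the paired image
        # features, so the model can learn cross-modal interactions.
        tokens = torch.stack([self.fmri_embed(visible), self.img_embed(img_feat)], dim=1)
        fused = self.fuse(tokens)
        recon = self.decode_fmri(fused[:, 0])  # predict the full voxel vector
        # Reconstruction loss is computed only on the masked voxels.
        loss = ((recon - fmri) ** 2 * mask).sum() / mask.sum().clamp(min=1)
        return loss

# One hypothetical training step on a batch of paired data.
model = BrainMAE()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
fmri_batch = torch.randn(8, 4000)   # stand-in for preprocessed fMRI vectors
img_batch = torch.randn(8, 768)     # stand-in for precomputed image features
loss = model(fmri_batch, img_batch)
loss.backward()
opt.step()
```

In the full pipeline the abstract describes, the pretrained fMRI branch would then serve as the brain encoder whose latent output conditions a diffusion model for image decoding; that stage is omitted here.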

Results: Our method achieves high-quality decoding of both semantic content and low-level visual attributes, outperforming previous methods qualitatively and quantitatively while maintaining computational efficiency. Additionally, we apply the method to decode artificial patterns across regions of interest (ROIs) to explore their functional properties. We not only validate existing knowledge about ROIs but also uncover new insights, such as the synergy between early visual cortex and higher-level scene ROIs, and the competition within the higher-level scene ROIs.
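As a rough illustration of the ROI-probing experiment described above (again an assumption-laden sketch rather than the paper's code), one could construct synthetic fMRI patterns that activate a single ROI and feed each one to the brain-conditioned decoder to see what kind of image it produces. The ROI names and voxel index ranges below are hypothetical placeholders; in practice they would come from an atlas or a per-subject localizer.

```python
# Illustrative sketch (not the paper's code) of probing ROIs by decoding
# artificial activation patterns: voxels inside a chosen ROI are set to a
# constant activation, all other voxels to zero.
import numpy as np

def make_roi_pattern(n_voxels, roi_indices, activation=1.0):
    """Build a synthetic fMRI vector that activates only the given ROI."""
    pattern = np.zeros(n_voxels, dtype=np.float32)
    pattern[roi_indices] = activation
    return pattern

# Hypothetical ROI voxel index sets (placeholders, not real coordinates).
rois = {
    "V1": np.arange(0, 500),        # early visual cortex
    "PPA": np.arange(3000, 3300),   # a higher-level scene-selective ROI
}

n_voxels = 4000
for name, idx in rois.items():
    pattern = make_roi_pattern(n_voxels, idx)
    # decoded_image = diffusion_decoder(pattern)  # brain-conditioned generator (not shown)
    print(f"{name}: {int(pattern.sum())} active voxels")
```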

Conclusions: These findings provide valuable insights for future directions in the field of neural decoding.

Source journal: Brain Sciences (Neuroscience: General Neuroscience)
CiteScore: 4.80
Self-citation rate: 9.10%
Annual publications: 1472
Average review time: 18.71 days
About the journal: Brain Sciences (ISSN 2076-3425) is a peer-reviewed scientific journal that publishes original articles, critical reviews, research notes and short communications in the areas of cognitive neuroscience, developmental neuroscience, molecular and cellular neuroscience, neural engineering, neuroimaging, neurolinguistics, neuropathy, systems neuroscience, and theoretical and computational neuroscience. Our aim is to encourage scientists to publish their experimental and theoretical results in as much detail as possible. There is no restriction on the length of the papers. The full experimental details must be provided so that the results can be reproduced. Electronic files or software regarding the full details of the calculation and experimental procedure, if unable to be published in a normal way, can be deposited as supplementary material.