SpaRG: Sparsely Reconstructed Graphs for Generalizable fMRI Analysis.

Camila González, Yanis Miraoui, Yiran Fan, Ehsan Adeli, Kilian M Pohl
{"title":"稀疏重构图用于可推广的fMRI分析。","authors":"Camila González, Yanis Miraoui, Yiran Fan, Ehsan Adeli, Kilian M Pohl","doi":"10.1007/978-3-031-78761-4_5","DOIUrl":null,"url":null,"abstract":"<p><p>Deep learning can help uncover patterns in resting-state functional Magnetic Resonance Imaging (rs-fMRI) associated with psychiatric disorders and personal traits. Yet the problem of interpreting deep learning findings is rarely more evident than in fMRI analyses, as the data is sensitive to scanning effects and inherently difficult to visualize. We propose a simple approach to mitigate these challenges grounded on sparsification and self-supervision. Instead of extracting post-hoc feature attributions to uncover functional connections that are important to the target task, we identify a small subset of highly informative connections during training and occlude the rest. To this end, we jointly train a (1) sparse input mask, (2) variational autoencoder (VAE), and (3) downstream classifier in an end-to-end fashion. While we need a portion of labeled samples to train the classifier, we optimize the sparse mask and VAE with unlabeled data from additional acquisition sites, retaining only the input features that generalize well. We evaluate our method - <b>Spa</b>rsely <b>R</b>econstructed <b>G</b>raphs (<b>SpaRG</b>) - on the public ABIDE dataset for the task of sex classification, training with labeled cases from 18 sites and adapting the model to two additional out-of-distribution sites with a portion of unlabeled samples. For a relatively coarse parcellation (64 regions), SpaRG utilizes only 1% of the original connections while improving the classification accuracy across domains. Our code can be found at www.github.com/yanismiraoui/SpaRG.</p>","PeriodicalId":520367,"journal":{"name":"Machine learning in clinical neuroimaging : 7th international workshop, MLCN 2024, held in conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, proceedings. MLCN (Workshop) (7th : 2024 : Marrakesh, Morocco)","volume":"15266 ","pages":"46-56"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11694515/pdf/","citationCount":"0","resultStr":"{\"title\":\"SpaRG: Sparsely Reconstructed Graphs for Generalizable fMRI Analysis.\",\"authors\":\"Camila González, Yanis Miraoui, Yiran Fan, Ehsan Adeli, Kilian M Pohl\",\"doi\":\"10.1007/978-3-031-78761-4_5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Deep learning can help uncover patterns in resting-state functional Magnetic Resonance Imaging (rs-fMRI) associated with psychiatric disorders and personal traits. Yet the problem of interpreting deep learning findings is rarely more evident than in fMRI analyses, as the data is sensitive to scanning effects and inherently difficult to visualize. We propose a simple approach to mitigate these challenges grounded on sparsification and self-supervision. Instead of extracting post-hoc feature attributions to uncover functional connections that are important to the target task, we identify a small subset of highly informative connections during training and occlude the rest. To this end, we jointly train a (1) sparse input mask, (2) variational autoencoder (VAE), and (3) downstream classifier in an end-to-end fashion. 
While we need a portion of labeled samples to train the classifier, we optimize the sparse mask and VAE with unlabeled data from additional acquisition sites, retaining only the input features that generalize well. We evaluate our method - <b>Spa</b>rsely <b>R</b>econstructed <b>G</b>raphs (<b>SpaRG</b>) - on the public ABIDE dataset for the task of sex classification, training with labeled cases from 18 sites and adapting the model to two additional out-of-distribution sites with a portion of unlabeled samples. For a relatively coarse parcellation (64 regions), SpaRG utilizes only 1% of the original connections while improving the classification accuracy across domains. Our code can be found at www.github.com/yanismiraoui/SpaRG.</p>\",\"PeriodicalId\":520367,\"journal\":{\"name\":\"Machine learning in clinical neuroimaging : 7th international workshop, MLCN 2024, held in conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, proceedings. MLCN (Workshop) (7th : 2024 : Marrakesh, Morocco)\",\"volume\":\"15266 \",\"pages\":\"46-56\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11694515/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Machine learning in clinical neuroimaging : 7th international workshop, MLCN 2024, held in conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, proceedings. MLCN (Workshop) (7th : 2024 : Marrakesh, Morocco)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1007/978-3-031-78761-4_5\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2024/12/6 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning in clinical neuroimaging : 7th international workshop, MLCN 2024, held in conjunction with MICCAI 2024, Marrakesh, Morocco, October 10, 2024, proceedings. MLCN (Workshop) (7th : 2024 : Marrakesh, Morocco)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-031-78761-4_5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/12/6 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract


Deep learning can help uncover patterns in resting-state functional Magnetic Resonance Imaging (rs-fMRI) associated with psychiatric disorders and personal traits. Yet the problem of interpreting deep learning findings is rarely more evident than in fMRI analyses, as the data is sensitive to scanning effects and inherently difficult to visualize. We propose a simple approach to mitigate these challenges grounded on sparsification and self-supervision. Instead of extracting post-hoc feature attributions to uncover functional connections that are important to the target task, we identify a small subset of highly informative connections during training and occlude the rest. To this end, we jointly train a (1) sparse input mask, (2) variational autoencoder (VAE), and (3) downstream classifier in an end-to-end fashion. While we need a portion of labeled samples to train the classifier, we optimize the sparse mask and VAE with unlabeled data from additional acquisition sites, retaining only the input features that generalize well. We evaluate our method - Sparsely Reconstructed Graphs (SpaRG) - on the public ABIDE dataset for the task of sex classification, training with labeled cases from 18 sites and adapting the model to two additional out-of-distribution sites with a portion of unlabeled samples. For a relatively coarse parcellation (64 regions), SpaRG utilizes only 1% of the original connections while improving the classification accuracy across domains. Our code can be found at www.github.com/yanismiraoui/SpaRG.
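To make the joint objective concrete, below is a minimal sketch in PyTorch of the kind of model the abstract describes: a learnable sparse mask over the vectorized connectivity matrix, a VAE that reconstructs the masked input, and a downstream classifier on the latent code, all updated in one end-to-end step. The module names, architecture sizes, and loss weights here are illustrative assumptions, not the authors' implementation; the actual code is at www.github.com/yanismiraoui/SpaRG.

```python
# Illustrative sketch of a SpaRG-style joint objective (assumed PyTorch setup;
# names, dimensions, and loss weights are hypothetical).
import torch
import torch.nn as nn
import torch.nn.functional as F

N_ROIS = 64                              # coarse parcellation with 64 regions
N_EDGES = N_ROIS * (N_ROIS - 1) // 2     # upper-triangular functional connections


class SparseMask(nn.Module):
    """Learnable soft mask over the vectorized connectivity matrix."""
    def __init__(self, n_edges: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(n_edges))

    def forward(self, x):
        m = torch.sigmoid(self.logits)   # values in (0, 1)
        return x * m, m                  # edges with small mask values are occluded


class ConnectomeVAE(nn.Module):
    """Small VAE that reconstructs the (masked) connectivity vector."""
    def __init__(self, n_edges: int, latent_dim: int = 64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_edges, 256), nn.ReLU())
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, n_edges))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar


mask = SparseMask(N_EDGES)
vae = ConnectomeVAE(N_EDGES)
clf = nn.Linear(64, 2)                   # downstream sex classifier on the latent code
opt = torch.optim.Adam([*mask.parameters(), *vae.parameters(), *clf.parameters()], lr=1e-3)


def training_step(x_lab, y_lab, x_unlab, lambda_sparse=1e-3, lambda_kl=1e-2):
    """One joint update: the mask and VAE see labeled and unlabeled sites,
    while the cross-entropy term uses only the labeled batch."""
    x_all = torch.cat([x_lab, x_unlab], dim=0)
    x_masked, m = mask(x_all)
    recon, mu, logvar = vae(x_masked)

    loss_rec = F.mse_loss(recon, x_all)                        # self-supervised reconstruction
    loss_kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    loss_sparse = m.mean()                                     # L1 pressure toward an empty mask

    mu_lab = mu[: x_lab.shape[0]]                              # labeled samples come first
    loss_cls = F.cross_entropy(clf(mu_lab), y_lab)

    loss = loss_cls + loss_rec + lambda_kl * loss_kl + lambda_sparse * loss_sparse
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In this sketch, the sparsity term pushes mask entries toward zero, so after training one could binarize the mask (for instance, keep only the strongest entries and occlude the rest), which mirrors the roughly 1% of retained connections reported in the abstract; the exact sparsification schedule is a detail of the authors' implementation and not shown here.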
