Confound Controlled Multimodal Neuroimaging Data Fusion and Its Application to Developmental Disorders

Impact Factor: 13.7
Chuang Liang, Rogers F. Silva, Tülay Adali, Rongtao Jiang, Daoqiang Zhang, Shile Qi, Vince D. Calhoun
DOI: 10.1109/TIP.2025.3597045
Journal: IEEE Transactions on Image Processing, vol. 34, pp. 5271-5284
Published: 2025-08-14 (Journal Article)
URL: https://ieeexplore.ieee.org/document/11125858/
Citations: 0

Abstract

Multimodal fusion provides multiple benefits over single-modality analysis by leveraging both shared and complementary information from different modalities. Notably, supervised fusion has attracted extensive interest for capturing multimodal co-varying patterns associated with clinical measures. A key challenge of brain data analysis is how to handle confounds, which, if unaddressed, can lead to an unrealistic description of the relationship between the brain and clinical measures. Current approaches often rely on linear regression to remove covariate effects prior to fusion, which may lead to information loss, rather than pursuing the more global strategy of optimizing fusion and covariate removal simultaneously. Thus, we propose "CR-mCCAR" to jointly optimize for confounds within a guided fusion model, capturing co-varying multimodal patterns associated with a specific clinical domain while also discounting covariate effects. Simulations show that CR-mCCAR separates the reference and covariate factors accurately. Functional and structural neuroimaging data fusion reveals co-varying patterns in attention deficit/hyperactivity disorder (ADHD; striato-thalamo-cortical and salience areas) and in autism spectrum disorder (ASD; salience and fronto-temporal areas) that are linked with core symptoms but uncorrelated with age and motion. These results replicate in an independent cohort. Downstream classification accuracy between ADHD/ASD and controls is markedly higher for CR-mCCAR than for performing fusion and regression separately. CR-mCCAR can be extended to include multiple targets and multiple covariates. Overall, results demonstrate that CR-mCCAR can jointly optimize for target components that correlate with the reference(s) while removing nuisance covariates. This approach can improve the meaningful detection of reliable phenotype-linked multimodal biomarkers for brain disorders.
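For orientation, the conventional two-step baseline the abstract contrasts against (regress covariate effects out of each modality, then extract a reference-correlated component) can be sketched in a few lines. This is a minimal illustration with synthetic stand-in data, not the authors' CR-mCCAR algorithm: the data, dimensions, and the rank-one reference-guided projection are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: two feature "modalities", a clinical reference score,
# and nuisance covariates (e.g., age and head motion). All synthetic.
n = 200
X1 = rng.standard_normal((n, 30))    # modality 1 (e.g., functional features)
X2 = rng.standard_normal((n, 25))    # modality 2 (e.g., structural features)
ref = rng.standard_normal(n)         # clinical reference measure
cov = rng.standard_normal((n, 2))    # covariate columns: age, motion

def residualize(Y, C):
    """Remove linear covariate effects by ordinary least squares."""
    C1 = np.column_stack([np.ones(len(C)), C])    # intercept + covariates
    beta, *_ = np.linalg.lstsq(C1, Y, rcond=None)
    return Y - C1 @ beta                          # residuals, zero-mean columns

def ref_guided_component(X, r):
    """One loading vector chosen to maximize covariance with the reference."""
    w = X.T @ (r - r.mean())                      # covariance with reference
    w /= np.linalg.norm(w)
    return X @ w                                  # subject-wise expression

# Two-step pipeline: residualize each modality, then extract one
# reference-guided component per modality.
c1 = ref_guided_component(residualize(X1, cov), ref)
c2 = ref_guided_component(residualize(X2, cov), ref)

# OLS residuals are exactly orthogonal to the covariate columns, so the
# extracted components carry no linear covariate signal by construction.
print(np.round(np.corrcoef(c1, cov[:, 0])[0, 1], 6))
```

The paper's point is that this sequential approach can discard reference-relevant variance along with the covariates; CR-mCCAR instead folds the covariate term into the fusion objective so both are optimized jointly.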