Q-space Guided Multi-Modal Translation Network for Diffusion-Weighted Image Synthesis

IF 9.8 · CAS Tier 1 (Medicine) · JCR Q1 · COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS
Pengli Zhu, Yingji Fu, Nanguang Chen, Anqi Qiu
{"title":"扩散加权图像合成的q空间引导多模态平移网络。","authors":"Pengli Zhu,Yingji Fu,Nanguang Chen,Anqi Qiu","doi":"10.1109/tmi.2025.3618683","DOIUrl":null,"url":null,"abstract":"Diffusion-weighted imaging (DWI) enables non-invasive characterization of tissue microstructure, yet acquiring densely sampled q-space data remains time-consuming and impractical in many clinical settings. Existing deep learning methods are typically constrained by fixed q-space sampling, limiting their adaptability to variable sampling scenarios. In this paper, we propose a Q-space Guided Multi-Modal Translation Network (Q-MMTN) for synthesizing multi-shell, high-angular resolution DWI (MS-HARDI) from flexible q-space sampling, leveraging commonly acquired structural data (e.g., T1- and T2-weighted MRI). Q-MMTN integrates the hybrid encoder and multi-modal attention fusion mechanism to effectively extract both local and global complementary information from multiple modalities. This design enhances feature representation and, together with a flexible q-space-aware embedding, enables dynamic modulation of internal features without relying on fixed sampling schemes. Additionally, we introduce a set of task-specific constraints, including adversarial, reconstruction, and anatomical consistency losses, which jointly enforce anatomical fidelity and signal realism. These constraints guide Q-MMTN to accurately capture the intrinsic and nonlinear relationships between directional DWI signals and q-space information. Extensive experiments across four lifespan datasets of children, adolescents, young and older adults demonstrate that Q-MMTN outperforms existing methods, including 1D-qDL, 2D-qDL, MESC-SD, and Q-GAN in estimating parameter maps and fiber tracts with fine-grained anatomical details. Notably, its ability to accommodate flexible q-space sampling highlights its potential as a promising toolkit for clinical and research applications. Our code is available at https://github.com/Idea89560041/Q-MMTN.","PeriodicalId":13418,"journal":{"name":"IEEE Transactions on Medical Imaging","volume":"33 1","pages":""},"PeriodicalIF":9.8000,"publicationDate":"2025-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Q-space Guided Multi-Modal Translation Network for Diffusion-Weighted Image Synthesis.\",\"authors\":\"Pengli Zhu,Yingji Fu,Nanguang Chen,Anqi Qiu\",\"doi\":\"10.1109/tmi.2025.3618683\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Diffusion-weighted imaging (DWI) enables non-invasive characterization of tissue microstructure, yet acquiring densely sampled q-space data remains time-consuming and impractical in many clinical settings. Existing deep learning methods are typically constrained by fixed q-space sampling, limiting their adaptability to variable sampling scenarios. In this paper, we propose a Q-space Guided Multi-Modal Translation Network (Q-MMTN) for synthesizing multi-shell, high-angular resolution DWI (MS-HARDI) from flexible q-space sampling, leveraging commonly acquired structural data (e.g., T1- and T2-weighted MRI). Q-MMTN integrates the hybrid encoder and multi-modal attention fusion mechanism to effectively extract both local and global complementary information from multiple modalities. This design enhances feature representation and, together with a flexible q-space-aware embedding, enables dynamic modulation of internal features without relying on fixed sampling schemes. 
Additionally, we introduce a set of task-specific constraints, including adversarial, reconstruction, and anatomical consistency losses, which jointly enforce anatomical fidelity and signal realism. These constraints guide Q-MMTN to accurately capture the intrinsic and nonlinear relationships between directional DWI signals and q-space information. Extensive experiments across four lifespan datasets of children, adolescents, young and older adults demonstrate that Q-MMTN outperforms existing methods, including 1D-qDL, 2D-qDL, MESC-SD, and Q-GAN in estimating parameter maps and fiber tracts with fine-grained anatomical details. Notably, its ability to accommodate flexible q-space sampling highlights its potential as a promising toolkit for clinical and research applications. Our code is available at https://github.com/Idea89560041/Q-MMTN.\",\"PeriodicalId\":13418,\"journal\":{\"name\":\"IEEE Transactions on Medical Imaging\",\"volume\":\"33 1\",\"pages\":\"\"},\"PeriodicalIF\":9.8000,\"publicationDate\":\"2025-10-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Medical Imaging\",\"FirstCategoryId\":\"5\",\"ListUrlMain\":\"https://doi.org/10.1109/tmi.2025.3618683\",\"RegionNum\":1,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Medical Imaging","FirstCategoryId":"5","ListUrlMain":"https://doi.org/10.1109/tmi.2025.3618683","RegionNum":1,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Diffusion-weighted imaging (DWI) enables non-invasive characterization of tissue microstructure, yet acquiring densely sampled q-space data remains time-consuming and impractical in many clinical settings. Existing deep learning methods are typically constrained by fixed q-space sampling, limiting their adaptability to variable sampling scenarios. In this paper, we propose a Q-space Guided Multi-Modal Translation Network (Q-MMTN) for synthesizing multi-shell, high-angular resolution DWI (MS-HARDI) from flexible q-space sampling, leveraging commonly acquired structural data (e.g., T1- and T2-weighted MRI). Q-MMTN integrates the hybrid encoder and multi-modal attention fusion mechanism to effectively extract both local and global complementary information from multiple modalities. This design enhances feature representation and, together with a flexible q-space-aware embedding, enables dynamic modulation of internal features without relying on fixed sampling schemes. Additionally, we introduce a set of task-specific constraints, including adversarial, reconstruction, and anatomical consistency losses, which jointly enforce anatomical fidelity and signal realism. These constraints guide Q-MMTN to accurately capture the intrinsic and nonlinear relationships between directional DWI signals and q-space information. Extensive experiments across four lifespan datasets of children, adolescents, young and older adults demonstrate that Q-MMTN outperforms existing methods, including 1D-qDL, 2D-qDL, MESC-SD, and Q-GAN in estimating parameter maps and fiber tracts with fine-grained anatomical details. Notably, its ability to accommodate flexible q-space sampling highlights its potential as a promising toolkit for clinical and research applications. Our code is available at https://github.com/Idea89560041/Q-MMTN.
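The abstract names two architectural ideas: a q-space-aware embedding that dynamically modulates internal features from an arbitrary sampling scheme, and multi-modal attention fusion over the structural inputs. The paper and its repository (https://github.com/Idea89560041/Q-MMTN) define the actual modules; the PyTorch sketch below is only one plausible reading, assuming a FiLM-style modulation conditioned on a (b-value, gradient-direction) vector and multi-head cross-attention from T1w queries to T2w keys/values. All module names, sizes, and design choices here are hypothetical, not the authors' implementation.

```python
# Minimal sketch, NOT the authors' implementation: (1) a q-space-aware
# embedding that modulates feature maps per (b-value, gradient direction),
# and (2) attention-based fusion of T1w/T2w feature maps.
import torch
import torch.nn as nn

class QSpaceEmbedding(nn.Module):
    """Maps one q-space sample (b, gx, gy, gz) to per-channel scale/shift,
    so the same network can serve any sampling scheme (FiLM-style guess)."""
    def __init__(self, channels: int, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(4, hidden), nn.SiLU(),
            nn.Linear(hidden, 2 * channels),  # -> (gamma, beta)
        )

    def forward(self, feat: torch.Tensor, q: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W); q: (B, 4) = (b-value, unit gradient direction)
        gamma, beta = self.mlp(q).chunk(2, dim=-1)
        gamma = gamma[..., None, None]  # broadcast over spatial dims
        beta = beta[..., None, None]
        return feat * (1 + gamma) + beta

class MultiModalAttentionFusion(nn.Module):
    """Fuses T1w and T2w feature maps with multi-head attention over
    flattened spatial tokens (one plausible reading of 'attention fusion')."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, t1_feat: torch.Tensor, t2_feat: torch.Tensor) -> torch.Tensor:
        B, C, H, W = t1_feat.shape
        q = t1_feat.flatten(2).transpose(1, 2)   # (B, HW, C) queries from T1w
        kv = t2_feat.flatten(2).transpose(1, 2)  # keys/values from T2w
        fused, _ = self.attn(q, kv, kv)
        fused = self.norm(fused + q)             # residual connection
        return fused.transpose(1, 2).reshape(B, C, H, W)

# Usage: condition fused structural features on an arbitrary q-space sample.
fuse = MultiModalAttentionFusion(channels=64)
embed = QSpaceEmbedding(channels=64)
t1, t2 = torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32)
q = torch.tensor([[1000.0, 0.0, 0.0, 1.0], [3000.0, 0.577, 0.577, 0.577]])
out = embed(fuse(t1, t2), q)  # (2, 64, 32, 32), modulated per q-space sample
```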
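Similarly, the three task-specific constraints the abstract lists (adversarial, reconstruction, and anatomical consistency losses) can be pictured as a single weighted objective. The sketch below is not the paper's loss: the weights and the finite-difference edge proxy for "anatomical consistency" are assumptions for illustration only.

```python
# Minimal sketch of a composite objective with the three constraint types
# named in the abstract. Weights and the gradient-based anatomical proxy
# are assumptions; the paper defines its own loss terms.
import torch
import torch.nn.functional as F

def spatial_gradients(x: torch.Tensor):
    # Finite-difference edges as a crude anatomical-structure proxy.
    return x[..., 1:, :] - x[..., :-1, :], x[..., :, 1:] - x[..., :, :-1]

def q_mmtn_style_loss(fake_dwi, real_dwi, disc_fake_logits,
                      w_adv=0.1, w_rec=1.0, w_anat=0.5):
    # Adversarial term: generator tries to make the discriminator output 1.
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    # Reconstruction term: voxel-wise fidelity to the acquired signal.
    rec = F.l1_loss(fake_dwi, real_dwi)
    # Anatomical-consistency term: match edge maps of synthetic and real DWI.
    (gy_f, gx_f), (gy_r, gx_r) = spatial_gradients(fake_dwi), spatial_gradients(real_dwi)
    anat = F.l1_loss(gy_f, gy_r) + F.l1_loss(gx_f, gx_r)
    return w_adv * adv + w_rec * rec + w_anat * anat
```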
Source journal
IEEE Transactions on Medical Imaging
Category: Medicine – Imaging Science & Photographic Technology
CiteScore: 21.80
Self-citation rate: 5.70%
Annual articles: 637
Review time: 5.6 months
Journal description: The IEEE Transactions on Medical Imaging (T-MI) is a journal that welcomes the submission of manuscripts focusing on various aspects of medical imaging. The journal encourages the exploration of body structure, morphology, and function through different imaging techniques, including ultrasound, X-rays, magnetic resonance, radionuclides, microwaves, and optical methods. It also promotes contributions related to cell and molecular imaging, as well as all forms of microscopy. T-MI publishes original research papers that cover a wide range of topics, including but not limited to novel acquisition techniques, medical image processing and analysis, visualization and performance, pattern recognition, machine learning, and other related methods. The journal particularly encourages highly technical studies that offer new perspectives. By emphasizing the unification of medicine, biology, and imaging, T-MI seeks to bridge the gap between instrumentation, hardware, software, mathematics, physics, biology, and medicine by introducing new analysis methods. While the journal welcomes strong application papers that describe novel methods, it directs papers that focus solely on important applications using medically adopted or well-established methods without significant innovation in methodology to other journals. T-MI is indexed in Pubmed® and Medline®, which are products of the United States National Library of Medicine.