A hetero-modal deep learning framework for medical image synthesis applied to contrast and non-contrast MRI

Impact Factor: 1.3 · Q3 · Radiology, Nuclear Medicine & Medical Imaging
Daniel Gourdeau, Simon Duchesne, Louis Archambault
{"title":"An hetero-modal deep learning framework for medical image synthesis applied to contrast and non-contrast MRI.","authors":"Daniel Gourdeau, Simon Duchesne, Louis Archambault","doi":"10.1088/2057-1976/ad72f9","DOIUrl":null,"url":null,"abstract":"<p><p>Some pathologies such as cancer and dementia require multiple imaging modalities to fully diagnose and assess the extent of the disease. Magnetic resonance imaging offers this kind of polyvalence, but examinations take time and can require contrast agent injection. The flexible synthesis of these imaging sequences based on the available ones for a given patient could help reduce scan times or circumvent the need for contrast agent injection. In this work, we propose a deep learning architecture that can perform the synthesis of all missing imaging sequences from any subset of available images. The network is trained adversarially, with the generator consisting of parallel 3D U-Net encoders and decoders that optimally combines their multi-resolution representations with a fusion operation learned by an attention network trained conjointly with the generator network. We compare our synthesis performance with 3D networks using other types of fusion and a comparable number of trainable parameters, such as the mean/variance fusion. In all synthesis scenarios except one, the synthesis performance of the network using attention-guided fusion was better than the other fusion schemes. We also inspect the encoded representations and the attention network outputs to gain insights into the synthesis process, and uncover desirable behaviors such as prioritization of specific modalities, flexible construction of the representation when important modalities are missing, and modalities being selected in regions where they carry sequence-specific information. This work suggests that a better construction of the latent representation space in hetero-modal networks can be achieved by using an attention network.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3000,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Physics & Engineering Express","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1088/2057-1976/ad72f9","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
引用次数: 0

Abstract

Some pathologies, such as cancer and dementia, require multiple imaging modalities to fully diagnose and assess the extent of the disease. Magnetic resonance imaging offers this kind of versatility, but examinations take time and can require contrast agent injection. Flexibly synthesizing the missing imaging sequences from those available for a given patient could help reduce scan times or circumvent the need for contrast agent injection. In this work, we propose a deep learning architecture that can synthesize all missing imaging sequences from any subset of available images. The network is trained adversarially; its generator consists of parallel 3D U-Net encoders and decoders whose multi-resolution representations are combined by a fusion operation learned by an attention network trained jointly with the generator. We compare our synthesis performance against 3D networks that have a comparable number of trainable parameters but use other fusion schemes, such as mean/variance fusion. In all synthesis scenarios except one, the network using attention-guided fusion outperformed the other fusion schemes. We also inspect the encoded representations and the attention network outputs to gain insight into the synthesis process, uncovering desirable behaviors such as the prioritization of specific modalities, flexible construction of the representation when important modalities are missing, and the selection of modalities in regions where they carry sequence-specific information. This work suggests that an attention network enables a better construction of the latent representation space in hetero-modal networks.
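To make the fusion comparison concrete, the following PyTorch sketch contrasts the mean/variance baseline with a voxel-wise attention-guided fusion of per-modality encoder features. This is a minimal illustration under assumed names and shapes (mean_variance_fusion, AttentionFusion, and the small convolutional scoring network are ours), not the authors' implementation.

```python
# Sketch of the two fusion schemes compared in the paper (illustrative only).
# Assumes each available modality's 3D U-Net encoder has already produced a
# feature map of shape (B, C, D, H, W).

import torch
import torch.nn as nn


def mean_variance_fusion(features: list[torch.Tensor]) -> torch.Tensor:
    """Mean/variance fusion (baseline): concatenate the mean and variance
    computed across the available modality features."""
    stacked = torch.stack(features, dim=0)        # (M, B, C, D, H, W)
    mean = stacked.mean(dim=0)
    # Variance is zero when only one modality is present.
    var = stacked.var(dim=0, unbiased=False)
    return torch.cat([mean, var], dim=1)          # (B, 2C, D, H, W)


class AttentionFusion(nn.Module):
    """Attention-guided fusion: a small conv network scores each available
    modality voxel-wise; features are combined as a softmax-weighted sum, so
    the fused output has a fixed channel count for any number of inputs."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv3d(channels, channels // 2, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels // 2, 1, kernel_size=1),
        )

    def forward(self, features: list[torch.Tensor]) -> torch.Tensor:
        stacked = torch.stack(features, dim=0)    # (M, B, C, D, H, W)
        scores = torch.stack([self.score(f) for f in features], dim=0)
        weights = torch.softmax(scores, dim=0)    # normalize over modalities
        return (weights * stacked).sum(dim=0)     # (B, C, D, H, W)


if __name__ == "__main__":
    # Two available modalities, e.g. T1 and FLAIR encoder features.
    feats = [torch.randn(1, 16, 8, 8, 8) for _ in range(2)]
    fused = AttentionFusion(16)(feats)
    print(fused.shape)  # torch.Size([1, 16, 8, 8, 8])
```

Because the attention weights are computed per voxel and normalized only over the modalities actually supplied, the fused representation can prioritize different modalities in different regions and still produce a fixed-size output when inputs are missing, consistent with the behaviors reported above.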

Journal

Biomedical Physics & Engineering Express
CiteScore: 2.80 · Self-citation rate: 0.00% · Annual publications: 153
Journal description: BPEX is an inclusive, international, multidisciplinary journal devoted to publishing new research on any application of physics and/or engineering in medicine and/or biology. Characterized by broad geographical coverage and a fast-track peer-review process, relevant topics include all aspects of biophysics, medical physics and biomedical engineering. Papers that are almost entirely clinical or biological in their focus are not suitable. The journal has an emphasis on publishing interdisciplinary work and bringing research fields together, encompassing experimental, theoretical and computational work.