Reliable multi-modal medical image-to-image translation independent of pixel-wise aligned data

IF 3.2 · CAS Tier 2 (Medicine) · Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Medical Physics · Pub Date: 2024-08-17 · DOI: 10.1002/mp.17362
Langrui Zhou, Guang Li
{"title":"独立于像素对齐数据的可靠多模态医学影像对图像翻译。","authors":"Langrui Zhou,&nbsp;Guang Li","doi":"10.1002/mp.17362","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>The current mainstream multi-modal medical image-to-image translation methods face a contradiction. Supervised methods with outstanding performance rely on pixel-wise aligned training data to constrain the model optimization. However, obtaining pixel-wise aligned multi-modal medical image datasets is challenging. Unsupervised methods can be trained without paired data, but their reliability cannot be guaranteed. At present, there is no ideal multi-modal medical image-to-image translation method that can generate reliable translation results without the need for pixel-wise aligned data.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>This work aims to develop a novel medical image-to-image translation model that is independent of pixel-wise aligned data (MITIA), enabling reliable multi-modal medical image-to-image translation under the condition of misaligned training data.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>The proposed MITIA model utilizes a prior extraction network composed of a multi-modal medical image registration module and a multi-modal misalignment error detection module to extract pixel-level prior information from training data with misalignment errors to the largest extent. The extracted prior information is then used to construct a regularization term to constrain the optimization of the unsupervised cycle-consistent Generative Adversarial Network model, restricting its solution space and thereby improving the performance and reliability of the generator. We trained the MITIA model using six datasets containing different misalignment errors and two well-aligned datasets. Subsequently, we conducted quantitative analysis using peak signal-to-noise ratio and structural similarity as metrics. Moreover, we compared the proposed method with six other state-of-the-art image-to-image translation methods.</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>The results of both quantitative analysis and qualitative visual inspection indicate that MITIA achieves superior performance compared to the competing state-of-the-art methods, both on misaligned data and aligned data. Furthermore, MITIA shows more stability in the presence of misalignment errors in the training data, regardless of their severity or type.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>The proposed method achieves outstanding performance in multi-modal medical image-to-image translation tasks without aligned training data. 
Due to the difficulty in obtaining pixel-wise aligned data for medical image translation tasks, MITIA is expected to generate significant application value in this scenario compared to existing methods.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"51 11","pages":"8283-8301"},"PeriodicalIF":3.2000,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Reliable multi-modal medical image-to-image translation independent of pixel-wise aligned data\",\"authors\":\"Langrui Zhou,&nbsp;Guang Li\",\"doi\":\"10.1002/mp.17362\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div>\\n \\n \\n <section>\\n \\n <h3> Background</h3>\\n \\n <p>The current mainstream multi-modal medical image-to-image translation methods face a contradiction. Supervised methods with outstanding performance rely on pixel-wise aligned training data to constrain the model optimization. However, obtaining pixel-wise aligned multi-modal medical image datasets is challenging. Unsupervised methods can be trained without paired data, but their reliability cannot be guaranteed. At present, there is no ideal multi-modal medical image-to-image translation method that can generate reliable translation results without the need for pixel-wise aligned data.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Purpose</h3>\\n \\n <p>This work aims to develop a novel medical image-to-image translation model that is independent of pixel-wise aligned data (MITIA), enabling reliable multi-modal medical image-to-image translation under the condition of misaligned training data.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Methods</h3>\\n \\n <p>The proposed MITIA model utilizes a prior extraction network composed of a multi-modal medical image registration module and a multi-modal misalignment error detection module to extract pixel-level prior information from training data with misalignment errors to the largest extent. The extracted prior information is then used to construct a regularization term to constrain the optimization of the unsupervised cycle-consistent Generative Adversarial Network model, restricting its solution space and thereby improving the performance and reliability of the generator. We trained the MITIA model using six datasets containing different misalignment errors and two well-aligned datasets. Subsequently, we conducted quantitative analysis using peak signal-to-noise ratio and structural similarity as metrics. Moreover, we compared the proposed method with six other state-of-the-art image-to-image translation methods.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Results</h3>\\n \\n <p>The results of both quantitative analysis and qualitative visual inspection indicate that MITIA achieves superior performance compared to the competing state-of-the-art methods, both on misaligned data and aligned data. Furthermore, MITIA shows more stability in the presence of misalignment errors in the training data, regardless of their severity or type.</p>\\n </section>\\n \\n <section>\\n \\n <h3> Conclusions</h3>\\n \\n <p>The proposed method achieves outstanding performance in multi-modal medical image-to-image translation tasks without aligned training data. 
Due to the difficulty in obtaining pixel-wise aligned data for medical image translation tasks, MITIA is expected to generate significant application value in this scenario compared to existing methods.</p>\\n </section>\\n </div>\",\"PeriodicalId\":18384,\"journal\":{\"name\":\"Medical physics\",\"volume\":\"51 11\",\"pages\":\"8283-8301\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-08-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Medical physics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1002/mp.17362\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mp.17362","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract


Background

Current mainstream multi-modal medical image-to-image translation methods face a dilemma. Supervised methods with outstanding performance rely on pixel-wise aligned training data to constrain model optimization, but obtaining pixel-wise aligned multi-modal medical image datasets is challenging. Unsupervised methods can be trained without paired data, but their reliability cannot be guaranteed. At present, there is no ideal multi-modal medical image-to-image translation method that generates reliable translation results without requiring pixel-wise aligned data.

Purpose

This work aims to develop a novel medical image-to-image translation model that is independent of pixel-wise aligned data (MITIA), enabling reliable multi-modal medical image-to-image translation under the condition of misaligned training data.

Methods

The proposed MITIA model uses a prior extraction network, composed of a multi-modal medical image registration module and a multi-modal misalignment error detection module, to extract as much pixel-level prior information as possible from training data containing misalignment errors. The extracted prior information is then used to construct a regularization term that constrains the optimization of an unsupervised cycle-consistent Generative Adversarial Network (GAN) model, restricting its solution space and thereby improving the performance and reliability of the generator. We trained the MITIA model on six datasets containing different misalignment errors and on two well-aligned datasets. We then conducted quantitative analysis using peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) as metrics, and compared the proposed method with six other state-of-the-art image-to-image translation methods.
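The abstract does not give the exact loss formulation, so the following is a minimal sketch of the kind of objective described: a CycleGAN-style adversarial plus cycle-consistency loss, with an additional pixel-level prior term that is active only where the misalignment-detection stage judges the registered training pair to be reliable. All names (`mitia_style_generator_loss`, `align_mask`, `lambda_prior`, and the loss weights) are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a masked-prior-regularized CycleGAN generator loss.
import torch
import torch.nn.functional as F

def mitia_style_generator_loss(
    fake_b: torch.Tensor,             # G_AB(real_a): translated image in modality B
    rec_a: torch.Tensor,              # G_BA(G_AB(real_a)): cycle-reconstructed A
    real_a: torch.Tensor,             # input image in modality A
    registered_b: torch.Tensor,       # modality-B image after registration to A
    align_mask: torch.Tensor,         # 1 where the misalignment detector trusts the pair, else 0
    disc_score_fake_b: torch.Tensor,  # discriminator output for fake_b
    lambda_cycle: float = 10.0,       # assumed weight, not from the paper
    lambda_prior: float = 5.0,        # assumed weight, not from the paper
) -> torch.Tensor:
    """Adversarial + cycle-consistency loss with a masked pixel-level prior term."""
    # Least-squares adversarial loss for the generator (LSGAN-style, as in CycleGAN).
    adv_loss = F.mse_loss(disc_score_fake_b, torch.ones_like(disc_score_fake_b))
    # Standard cycle-consistency loss.
    cycle_loss = F.l1_loss(rec_a, real_a)
    # Prior regularization: penalize deviation from the registered target only on
    # pixels that the misalignment-detection module marked as reliable.
    denom = align_mask.sum().clamp(min=1.0)
    prior_loss = (align_mask * (fake_b - registered_b).abs()).sum() / denom
    return adv_loss + lambda_cycle * cycle_loss + lambda_prior * prior_loss

if __name__ == "__main__":
    # Random tensors stand in for network outputs, just to show the call shape.
    b, c, h, w = 2, 1, 64, 64
    loss = mitia_style_generator_loss(
        fake_b=torch.rand(b, c, h, w),
        rec_a=torch.rand(b, c, h, w),
        real_a=torch.rand(b, c, h, w),
        registered_b=torch.rand(b, c, h, w),
        align_mask=(torch.rand(b, c, h, w) > 0.3).float(),
        disc_score_fake_b=torch.rand(b, 1, 8, 8),
    )
    print(loss.item())
```

When the mask is all zeros (no trusted pixels), the prior term vanishes and the objective reduces to a plain unsupervised cycle-consistent loss, which matches the intuition that the prior should only constrain the generator where the data support it.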

Results

The results of both quantitative analysis and qualitative visual inspection indicate that MITIA outperforms the competing state-of-the-art methods on both misaligned and aligned data. Furthermore, MITIA remains more stable in the presence of misalignment errors in the training data, regardless of their severity or type.
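For reference, PSNR and SSIM for a translated image against a ground-truth reference are commonly computed as in the sketch below (not the authors' evaluation code), here using scikit-image; the function and variable names are chosen for illustration.

```python
# Minimal PSNR/SSIM evaluation sketch using scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_translation(reference: np.ndarray, translated: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) for a translated image against its reference."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, translated, data_range=data_range)
    ssim = structural_similarity(reference, translated, data_range=data_range)
    return psnr, ssim

if __name__ == "__main__":
    # Synthetic example: a reference image and a slightly noisy "translation".
    rng = np.random.default_rng(0)
    ref = rng.random((128, 128)).astype(np.float32)
    out = np.clip(ref + 0.05 * rng.standard_normal(ref.shape).astype(np.float32), 0.0, 1.0)
    print(evaluate_translation(ref, out))
```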

Conclusions

The proposed method achieves outstanding performance in multi-modal medical image-to-image translation tasks without aligned training data. Because pixel-wise aligned data are difficult to obtain for medical image translation tasks, MITIA is expected to offer substantial practical value in this setting compared with existing methods.

Source journal
Medical Physics (Medicine - Nuclear Medicine)
CiteScore: 6.80
Self-citation rate: 15.80%
Articles published: 660
Review time: 1.7 months
Journal description: Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in 1) basic science developments with high potential for clinical translation, 2) clinical applications of cutting-edge engineering and physics innovations, and 3) broadly applicable and innovative clinical physics developments. Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.