Unpaired Dual-Modal Image Complementation Learning for Single-Modal Medical Image Segmentation.

Impact Factor: 4.4 · CAS Region 2 (Medicine) · JCR Q2 (Engineering, Biomedical)
Dehui Xiang, Tao Peng, Yun Bian, Lang Chen, Jianbin Zeng, Fei Shi, Weifang Zhu, Xinjian Chen
DOI: 10.1109/TBME.2024.3467216
Journal: IEEE Transactions on Biomedical Engineering, vol. PP
Published: 2024-09-25 (Journal Article)
Citations: 0

Abstract

Objective: Multi-modal MR/CT image segmentation is an important task in disease diagnosis and treatment, but it is usually difficult to acquire aligned multi-modal images of a patient in clinical practice due to the high cost and specific allergic reactions to contrast agents. To address these issues, a task complementation framework is proposed to enable unpaired multi-modal image complementation learning in the training stage and single-modal image segmentation in the inference stage.

Method: To fuse unpaired dual-modal images in the training stage and allow single-modal image segmentation in the inference stage, a synthesis-segmentation task complementation network is constructed to mutually facilitate cross-modal image synthesis and segmentation, since the same content feature can be used to perform both the image segmentation task and the image synthesis task. To maintain the consistency of target organs with varied shapes, a curvature consistency loss is proposed to align the segmentation predictions of the original image and the cross-modal synthesized image. To segment small lesions or substructures, a regression-segmentation task complementation network is constructed to utilize auxiliary features of the target organ.
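The abstract does not give the exact formulation of the curvature consistency loss. A minimal NumPy sketch, under the assumption that "curvature" refers to the level-set curvature div(∇p/|∇p|) of each segmentation probability map and that the loss is the mean squared curvature difference between the two predictions (both assumptions, not the paper's stated definition):

```python
import numpy as np

def curvature(p, eps=1e-8):
    """Level-set curvature kappa = div(grad p / |grad p|) of a 2-D map p."""
    py, px = np.gradient(p)                  # gradients along axis 0 (y) and axis 1 (x)
    norm = np.sqrt(px ** 2 + py ** 2) + eps  # eps avoids division by zero in flat regions
    nx, ny = px / norm, py / norm            # unit normal field of the level sets
    ny_y, _ = np.gradient(ny)                # d(ny)/dy
    _, nx_x = np.gradient(nx)                # d(nx)/dx
    return nx_x + ny_y                       # divergence of the unit normal

def curvature_consistency_loss(p_orig, p_syn):
    """Penalize curvature disagreement between the prediction on the
    original image and the prediction on the cross-modal synthesized image."""
    return np.mean((curvature(p_orig) - curvature(p_syn)) ** 2)

# Identical predictions incur zero loss.
p = np.random.rand(32, 32)
print(curvature_consistency_loss(p, p))  # → 0.0
```

In a training loop this term would be added to the usual segmentation and synthesis losses; an autodiff implementation (e.g. with finite-difference convolutions) would be needed for backpropagation.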

Results: Comprehensive experiments have been performed with an in-house dataset and a publicly available dataset. The experimental results have demonstrated the superiority of our framework over state-of-the-art methods.

Conclusion: The proposed method fuses dual-modal CT/MR images in the training stage and requires only single-modal CT/MR images in the inference stage.

Significance: The proposed method can be used in routine clinical settings where only a single-modal CT or MR image is available for a patient.

Source Journal

IEEE Transactions on Biomedical Engineering (Engineering: Biomedical)
CiteScore: 9.40
Self-citation rate: 4.30%
Annual articles: 880
Review time: 2.5 months
About the journal: IEEE Transactions on Biomedical Engineering contains basic and applied papers dealing with biomedical engineering. Papers range from engineering development in methods and techniques with biomedical applications to experimental and clinical investigations with engineering contributions.