Encoding matching criteria for cross-domain deformable image registration

IF 3.2 | CAS Region 2 (Medicine) | Q1 RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
Medical Physics | Pub Date: 2024-12-17 | DOI: 10.1002/mp.17565
Zhuoyuan Wang, Haiqiao Wang, Dong Ni, Ming Xu, Yi Wang
{"title":"Encoding matching criteria for cross-domain deformable image registration","authors":"Zhuoyuan Wang,&nbsp;Haiqiao Wang,&nbsp;Dong Ni,&nbsp;Ming Xu,&nbsp;Yi Wang","doi":"10.1002/mp.17565","DOIUrl":null,"url":null,"abstract":"<div>\n \n \n <section>\n \n <h3> Background</h3>\n \n <p>Most existing deep learning-based registration methods are trained on single-type images to address same-domain tasks, resulting in performance degradation when applied to new scenarios. Retraining a model for new scenarios requires extra time and data. Therefore, efficient and accurate solutions for cross-domain deformable registration are in demand.</p>\n </section>\n \n <section>\n \n <h3> Purpose</h3>\n \n <p>We argue that the tailor-made matching criteria in traditional registration methods is one of the main reason they are applicable in different domains. Motivated by this, we devise a registration-oriented encoder to model the matching criteria of image features and structural features, which is beneficial to boost registration accuracy and adaptability.</p>\n </section>\n \n <section>\n \n <h3> Methods</h3>\n \n <p>Specifically, a general feature encoder (Encoder-G) is proposed to capture comprehensive medical image features, while a structural feature encoder (Encoder-S) is designed to encode the structural self-similarity into the global representation. Moreover, by updating Encoder-S using one-shot learning, our method can effectively adapt to different domains. The efficacy of our method is evaluated using MRI images from three different domains, including brain images (training/testing: 870/90 pairs), abdomen images (training/testing: 1406/90 pairs), and cardiac images (training/testing: 64770/870 pairs). The comparison methods include traditional method (SyN) and cutting-edge deep networks. The evaluation metrics contain dice similarity coefficient (DSC) and average symmetric surface distance (ASSD).</p>\n </section>\n \n <section>\n \n <h3> Results</h3>\n \n <p>In the single-domain task, our method attains an average DSC of 68.9%/65.2%/72.8%, and ASSD of 9.75/3.82/1.30 mm on abdomen/cardiac/brain images, outperforming the second-best comparison methods by large margins. In the cross-domain task, without one-shot optimization, our method outperforms other deep networks in five out of six cross-domain scenarios and even surpasses symmetric image normalization method (SyN) in two scenarios. By conducting the one-shot optimization, our method successfully surpasses SyN in all six cross-domain scenarios.</p>\n </section>\n \n <section>\n \n <h3> Conclusions</h3>\n \n <p>Our method yields favorable results in the single-domain task while ensuring improved generalization and adaptation performance in the cross-domain task, showing its feasibility for the challenging cross-domain registration applications. 
The code is publicly available at https://github.com/JuliusWang-7/EncoderReg.</p>\n </section>\n </div>","PeriodicalId":18384,"journal":{"name":"Medical physics","volume":"52 4","pages":"2305-2315"},"PeriodicalIF":3.2000,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Medical physics","FirstCategoryId":"3","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/mp.17565","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Background

Most existing deep learning-based registration methods are trained on a single type of image to address same-domain tasks, resulting in performance degradation when applied to new scenarios. Retraining a model for new scenarios requires extra time and data. Therefore, efficient and accurate solutions for cross-domain deformable registration are in demand.

Purpose

We argue that the tailor-made matching criteria in traditional registration methods are one of the main reasons they remain applicable across different domains. Motivated by this, we devise a registration-oriented encoder to model the matching criteria of image features and structural features, which helps boost registration accuracy and adaptability.

Methods

Specifically, a general feature encoder (Encoder-G) is proposed to capture comprehensive medical image features, while a structural feature encoder (Encoder-S) is designed to encode structural self-similarity into the global representation. Moreover, by updating Encoder-S through one-shot learning, our method can effectively adapt to different domains. The efficacy of our method is evaluated using MR images from three different domains: brain images (training/testing: 870/90 pairs), abdomen images (training/testing: 1406/90 pairs), and cardiac images (training/testing: 64770/870 pairs). The comparison methods include a traditional method (SyN) and cutting-edge deep networks. The evaluation metrics are the Dice similarity coefficient (DSC) and the average symmetric surface distance (ASSD).
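
To make the adaptation idea concrete, below is a minimal PyTorch sketch, not the authors' released implementation: the module names, network sizes, and the MSE/smoothness loss are illustrative stand-ins. It shows how a structural encoder could be fine-tuned on a single image pair at test time while a general encoder and the registration head stay frozen.

```python
# Hypothetical sketch of test-time one-shot adaptation (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_encoder(in_ch=1, feat=16):
    # Tiny 3-layer CNN standing in for Encoder-G / Encoder-S.
    return nn.Sequential(
        nn.Conv3d(in_ch, feat, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv3d(feat, feat, 3, padding=1), nn.LeakyReLU(0.2),
        nn.Conv3d(feat, feat, 3, padding=1), nn.LeakyReLU(0.2))

class ToyRegNet(nn.Module):
    """Predicts a dense 3D displacement field from encoded moving/fixed features."""
    def __init__(self, feat=16):
        super().__init__()
        self.encoder_g = conv_encoder(feat=feat)   # general image features (frozen at test time)
        self.encoder_s = conv_encoder(feat=feat)   # structural features (adapted per domain)
        self.head = nn.Conv3d(4 * feat, 3, 3, padding=1)

    def forward(self, moving, fixed):
        feats = [self.encoder_g(moving), self.encoder_s(moving),
                 self.encoder_g(fixed),  self.encoder_s(fixed)]
        return self.head(torch.cat(feats, dim=1))

def warp(image, flow):
    # Warp `image` with a dense displacement field `flow` (voxel units) via grid_sample.
    d, h, w = image.shape[2:]
    grid = torch.stack(torch.meshgrid(
        torch.arange(d), torch.arange(h), torch.arange(w), indexing="ij"), dim=0).float()
    coords = grid.unsqueeze(0).to(flow) + flow                      # absolute voxel coordinates
    sizes = torch.tensor([d, h, w], dtype=flow.dtype, device=flow.device).view(1, 3, 1, 1, 1)
    coords = 2.0 * coords / (sizes - 1) - 1.0                        # normalize to [-1, 1]
    coords = coords.permute(0, 2, 3, 4, 1).flip(-1)                  # reorder to (x, y, z)
    return F.grid_sample(image, coords, align_corners=True)

def one_shot_adapt(model, moving, fixed, steps=20, lr=1e-4):
    # Freeze everything except the structural encoder, then fine-tune it on
    # the single test pair with an unsupervised similarity + smoothness loss.
    for p in model.parameters():
        p.requires_grad = False
    for p in model.encoder_s.parameters():
        p.requires_grad = True
    opt = torch.optim.Adam(model.encoder_s.parameters(), lr=lr)
    for _ in range(steps):
        flow = model(moving, fixed)
        warped = warp(moving, flow)
        sim = F.mse_loss(warped, fixed)                              # stand-in for an NCC/MIND-style term
        smooth = sum((flow.diff(dim=d) ** 2).mean() for d in (2, 3, 4))
        (sim + 0.1 * smooth).backward()
        opt.step()
        opt.zero_grad()
    return model

# Usage: adapt on one pair from a new domain before running inference on that domain.
model = ToyRegNet()
moving, fixed = torch.rand(1, 1, 32, 32, 32), torch.rand(1, 1, 32, 32, 32)
model = one_shot_adapt(model, moving, fixed)
```

The point the sketch illustrates is that only a small, structure-focused subnetwork is optimized at test time, which keeps one-shot adaptation cheap compared with retraining the whole registration model for each new domain.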

Results

In the single-domain task, our method attains an average DSC of 68.9%/65.2%/72.8% and an ASSD of 9.75/3.82/1.30 mm on abdomen/cardiac/brain images, outperforming the second-best comparison methods by large margins. In the cross-domain task, without one-shot optimization, our method outperforms other deep networks in five out of six cross-domain scenarios and even surpasses the symmetric image normalization method (SyN) in two scenarios. With the one-shot optimization, our method surpasses SyN in all six cross-domain scenarios.
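
For reference, the reported DSC and ASSD values follow their standard definitions over binary segmentation masks. Below is a minimal NumPy/SciPy sketch of those definitions; the function names are illustrative and not taken from the released code.

```python
# Standard DSC and ASSD computations over binary 3D masks (illustrative only).
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(a, b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def assd(a, b, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance between two mask boundaries (in mm)."""
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)                     # boundary voxels of A
    surf_b = b ^ binary_erosion(b)                     # boundary voxels of B
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]                           # distances from A's surface to B's surface
    d_ba = dist_to_a[surf_b]                           # and vice versa
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
```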

Conclusions

Our method yields favorable results in the single-domain task while ensuring improved generalization and adaptation performance in the cross-domain task, demonstrating its feasibility for challenging cross-domain registration applications. The code is publicly available at https://github.com/JuliusWang-7/EncoderReg.

Source journal: Medical Physics (Medicine - Nuclear Medicine)
CiteScore: 6.80
Self-citation rate: 15.80%
Articles published: 660
Review time: 1.7 months

Journal description: Medical Physics publishes original, high-impact physics, imaging science, and engineering research that advances patient diagnosis and therapy through contributions in:
1) Basic science developments with high potential for clinical translation
2) Clinical applications of cutting-edge engineering and physics innovations
3) Broadly applicable and innovative clinical physics developments
Medical Physics is a journal of global scope and reach. By publishing in Medical Physics your research will reach an international, multidisciplinary audience including practicing medical physicists as well as physics- and engineering-based translational scientists. We work closely with authors of promising articles to improve their quality.