Multimodal deformable registration based on unsupervised learning

JCR: Q3 (Engineering)
T. Ma, Z. Li, R. Liu, X. Fan, Z. Luo
{"title":"Multimodal deformable registration based on unsupervised learning","authors":"T. Ma, Z. Li, R. Liu, X. Fan, Z. Luo","doi":"10.13700/J.BH.1001-5965.2020.0449","DOIUrl":null,"url":null,"abstract":"Multimodal deformable registration is designed to solve dense spatial transformations and is used to align images of two different modalities It is a key issue in many medical image analysis applications Multimodal image registration based on traditional methods aims to solve the optimization problem of each pair of images, and usually achieves excellent registration performance, but the calculation cost is high and the running time is long The deep learning method greatly reduces the running time by learning the network used to perform registration These learning-based methods are very effective for single-modality registration However, the intensity distribution of different modal images is unknown and complex Most existing methods rely heavily on label data Faced with these challenges, this paper proposes a deep multimodal registration framework based on unsupervised learning Specifically, the framework consists of feature learning based on matching amount and deformation field learning based on maximum posterior probability, and realizes unsupervised training by means of spatial conversion function and differentiable mutual information loss function In the 3D image registration tasks of MRI T1, MRI T2 and CT, the proposed method is compared with the existing advanced multi-modal registration methods In addition, the registration performance of the proposed method is demonstrated on the latest COVID-19 CT data A large number of results show that the proposed method has a competitive advantage in registration accuracy compared with other methods, and greatly reduces the calculation time © 2021, Editorial Board of JBUAA All right reserved","PeriodicalId":39840,"journal":{"name":"北京航空航天大学学报","volume":"47 1","pages":"658-664"},"PeriodicalIF":0.0000,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"北京航空航天大学学报","FirstCategoryId":"1087","ListUrlMain":"https://doi.org/10.13700/J.BH.1001-5965.2020.0449","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"Engineering","Score":null,"Total":0}
Citations: 0

Abstract

Multimodal deformable registration solves for a dense spatial transformation that aligns images of two different modalities, and it is a key problem in many medical image analysis applications. Traditional multimodal registration methods solve an optimization problem for each image pair and usually achieve excellent registration performance, but their computational cost is high and their running time is long. Deep learning methods greatly reduce the running time by learning a network that performs the registration; these learning-based methods are very effective for single-modality registration. However, the intensity distributions of images from different modalities are unknown and complex, and most existing methods rely heavily on labeled data. Facing these challenges, this paper proposes a deep multimodal registration framework based on unsupervised learning. Specifically, the framework consists of feature learning based on a matching measure and deformation-field learning based on maximum a posteriori (MAP) probability, and it achieves unsupervised training by means of a spatial transformation function and a differentiable mutual information loss function. On 3D image registration tasks for MRI T1, MRI T2, and CT, the proposed method is compared with existing state-of-the-art multimodal registration methods. In addition, the registration performance of the proposed method is demonstrated on recent COVID-19 CT data. Extensive results show that the proposed method is competitive with other methods in registration accuracy while greatly reducing computation time. © 2021, Editorial Board of JBUAA. All rights reserved.
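The abstract names the two differentiable components that make unsupervised training possible: a spatial transformation function that warps the moving image by the predicted deformation field, and a mutual information loss that scores multimodal alignment without labels. Below is a minimal sketch of both, not the authors' implementation: it assumes PyTorch, uses a Gaussian Parzen-window approximation of the joint histogram (one common way to make mutual information differentiable), and the bin count and kernel width are illustrative.

```python
# Minimal sketch (assumptions noted above): spatial-transformer warp plus a
# differentiable mutual information loss for multimodal registration.
import torch
import torch.nn.functional as F


def warp(moving: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Warp a 3-D volume (N, 1, D, H, W) by a dense displacement field
    (N, 3, D, H, W) via trilinear sampling, so gradients reach the flow."""
    n, _, d, h, w = moving.shape
    # Identity sampling grid in voxel coordinates, shape (3, D, H, W).
    axes = [torch.arange(s, dtype=moving.dtype, device=moving.device)
            for s in (d, h, w)]
    identity = torch.stack(torch.meshgrid(*axes, indexing="ij"))
    coords = identity.unsqueeze(0) + flow  # displaced voxel coordinates
    # Normalize to [-1, 1] and reorder (z, y, x) -> (x, y, z) for grid_sample.
    sizes = torch.tensor([d, h, w], dtype=moving.dtype,
                         device=moving.device).view(1, 3, 1, 1, 1)
    coords = 2.0 * coords / (sizes - 1) - 1.0
    grid = coords.permute(0, 2, 3, 4, 1).flip(-1)  # (N, D, H, W, 3)
    return F.grid_sample(moving, grid, mode="bilinear", align_corners=True)


def mutual_information_loss(a, b, num_bins=32, sigma=0.5):
    """Negative MI between two intensity images scaled to [0, 1], made
    differentiable with a Gaussian Parzen-window joint-histogram estimate."""
    a = a.reshape(a.shape[0], -1)  # (N, V) flattened voxels
    b = b.reshape(b.shape[0], -1)
    centers = torch.linspace(0.0, 1.0, num_bins,
                             dtype=a.dtype, device=a.device)
    bw = (centers[1] - centers[0]) * sigma  # kernel width per bin
    # Soft bin memberships, shape (N, V, B); each voxel's weights sum to 1.
    wa = torch.exp(-0.5 * ((a.unsqueeze(-1) - centers) / bw) ** 2)
    wb = torch.exp(-0.5 * ((b.unsqueeze(-1) - centers) / bw) ** 2)
    wa = wa / (wa.sum(-1, keepdim=True) + 1e-8)
    wb = wb / (wb.sum(-1, keepdim=True) + 1e-8)
    # Joint distribution (N, B, B) and its marginals.
    p_ab = torch.bmm(wa.transpose(1, 2), wb) / a.shape[1]
    p_a = p_ab.sum(2, keepdim=True)
    p_b = p_ab.sum(1, keepdim=True)
    mi = (p_ab * torch.log(p_ab / (p_a * p_b + 1e-8) + 1e-8)).sum((1, 2))
    return -mi.mean()  # minimizing negative MI maximizes alignment
```

In an unsupervised training loop of this kind, the network would predict `flow` from the fixed and moving volumes, and the optimizer would minimize `mutual_information_loss(warp(moving, flow), fixed)`, typically together with a smoothness penalty on the flow; for large volumes the histogram is often estimated from a random voxel subsample to bound memory.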
Source journal: 北京航空航天大学学报 (Journal of Beijing University of Aeronautics and Astronautics) · Engineering - Aerospace Engineering
CiteScore: 1.50 · Self-citation rate: 0.00% · Articles published: 8537