Extended Capture Range of Rigid 2D/3D Registration by Estimating Riemannian Pose Gradients.

Wenhao Gu, Cong Gao, Robert Grupp, Javad Fotouhi, Mathias Unberath
{"title":"Extended Capture Range of Rigid 2D/3D Registration by Estimating Riemannian Pose Gradients.","authors":"Wenhao Gu,&nbsp;Cong Gao,&nbsp;Robert Grupp,&nbsp;Javad Fotouhi,&nbsp;Mathias Unberath","doi":"10.1007/978-3-030-59861-7_29","DOIUrl":null,"url":null,"abstract":"<p><p>Traditional intensity-based 2D/3D registration requires near-perfect initialization in order for image similarity metrics to yield meaningful updates of X-ray pose and reduce the likelihood of getting trapped in a local minimum. The conventional approaches strongly depend on image appearance rather than content, and therefore, fail in revealing large pose offsets that substantially alter the appearance of the same structure. We complement traditional similarity metrics with a convolutional neural network-based (CNN-based) registration solution that captures large-range pose relations by extracting both local and contextual information, yielding meaningful X-ray pose updates without the need for accurate initialization. To register a 2D X-ray image and a 3D CT scan, our CNN accepts a target X-ray image and a digitally reconstructed radiograph at the current pose estimate as input and iteratively outputs pose updates in the direction of the pose gradient on the Riemannian Manifold. Our approach integrates seamlessly with conventional image-based registration frameworks, where long-range relations are captured primarily by our CNN-based method while short-range offsets are recovered accurately with an image similarity-based method. On both synthetic and real X-ray images of the human pelvis, we demonstrate that the proposed method can successfully recover large rotational and translational offsets, irrespective of initialization.</p>","PeriodicalId":74092,"journal":{"name":"Machine learning in medical imaging. MLMI (Workshop)","volume":"12436 ","pages":"281-291"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7605345/pdf/nihms-1639752.pdf","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Machine learning in medical imaging. MLMI (Workshop)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1007/978-3-030-59861-7_29","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2020/9/29 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 7

Abstract

Traditional intensity-based 2D/3D registration requires near-perfect initialization in order for image similarity metrics to yield meaningful updates of X-ray pose and reduce the likelihood of getting trapped in a local minimum. The conventional approaches strongly depend on image appearance rather than content, and therefore, fail in revealing large pose offsets that substantially alter the appearance of the same structure. We complement traditional similarity metrics with a convolutional neural network-based (CNN-based) registration solution that captures large-range pose relations by extracting both local and contextual information, yielding meaningful X-ray pose updates without the need for accurate initialization. To register a 2D X-ray image and a 3D CT scan, our CNN accepts a target X-ray image and a digitally reconstructed radiograph at the current pose estimate as input and iteratively outputs pose updates in the direction of the pose gradient on the Riemannian Manifold. Our approach integrates seamlessly with conventional image-based registration frameworks, where long-range relations are captured primarily by our CNN-based method while short-range offsets are recovered accurately with an image similarity-based method. On both synthetic and real X-ray images of the human pelvis, we demonstrate that the proposed method can successfully recover large rotational and translational offsets, irrespective of initialization.
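
The iterative scheme described above lends itself to a compact illustration. The following Python sketch shows one way the registration loop could look; it is a minimal sketch under stated assumptions, not the authors' released code. `render_drr` (a DRR renderer for the CT volume) and `cnn_pose_update` (the trained CNN mapping a target X-ray / DRR pair to a 6-vector tangent-space update) are hypothetical placeholders, while the SE(3) exponential map used to apply each update on the pose manifold is standard.

```python
# Minimal sketch of CNN-driven iterative 2D/3D pose refinement on SE(3).
# `render_drr` and `cnn_pose_update` are hypothetical placeholders.
import numpy as np
from scipy.spatial.transform import Rotation


def se3_exp(twist):
    """Map a twist xi = (omega, v) in R^6 to a 4x4 rigid transform via the SE(3) exp map."""
    omega, v = twist[:3], twist[3:]
    theta = np.linalg.norm(omega)
    R = Rotation.from_rotvec(omega).as_matrix()
    if theta < 1e-8:
        V = np.eye(3)  # small-angle limit
    else:
        K = np.array([[0.0, -omega[2], omega[1]],
                      [omega[2], 0.0, -omega[0]],
                      [-omega[1], omega[0], 0.0]])
        V = (np.eye(3)
             + (1 - np.cos(theta)) / theta**2 * K
             + (theta - np.sin(theta)) / theta**3 * K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = V @ v
    return T


def register(target_xray, ct_volume, pose_init, render_drr, cnn_pose_update,
             n_iters=50, step_size=1.0, tol=1e-3):
    """Iteratively refine a 4x4 pose by following CNN-predicted tangent-space updates."""
    pose = pose_init.copy()
    for _ in range(n_iters):
        drr = render_drr(ct_volume, pose)                       # DRR at current pose estimate
        twist = np.asarray(cnn_pose_update(target_xray, drr))   # predicted update, shape (6,)
        if np.linalg.norm(twist) < tol:                         # small update: hand off to
            break                                               # similarity-based refinement
        pose = se3_exp(step_size * twist) @ pose                # manifold update (left-multiplied)
    return pose
```

In the full pipeline described in the abstract, this loop would recover the long-range offset and then hand the pose over to a conventional image similarity-based optimizer for accurate short-range refinement.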
