LCNet: A Robust and Accurate Non-Rigid 3-D Point Set Registration Approach for Image-Guided Liver Surgery

IEEE Transactions on Medical Robotics and Bionics · Impact Factor 3.8 · Q2 (Engineering, Biomedical)
Mingyang Liu; Geng Li; Hao Yu; Rui Song; Yibin Li; Max Q.-H. Meng; Zhe Min
DOI: 10.1109/TMRB.2025.3573420
Journal: IEEE Transactions on Medical Robotics and Bionics, vol. 7, no. 3, pp. 1073-1086
Published: 2025-03-26
URL: https://ieeexplore.ieee.org/document/11015600/

Abstract

In this paper, we propose a novel unsupervised learning-based non-rigid 3D point set registration method, the Learning Coherent Point Drift Network (LCNet), for image-guided liver surgery. We reformulate the classical probabilistic registration approach, Coherent Point Drift (CPD), into a learning-based paradigm. We first utilise a feature extraction module (FEM) to extract features of the two original point sets that are robust to rigid transformation. We then establish reliable correspondences between the point sets with an optimal transport (OT) module that leverages both the original points and the learned features. Rather than directly regressing displacement vectors, we compute the displacements by solving the associated matrix equation in the transformation module, where point localization noise is explicitly considered. In addition, we present three variants of the proposed approach, i.e., LCNet, LCNet-ED and LCNet-WD; among these, LCNet outperforms the other two, demonstrating the superiority of the Chamfer loss. We have extensively evaluated LCNet on simulated and real datasets. Under experimental conditions with the rotation angle in the range $[-45^{\circ}, 45^{\circ}]$ and the translation in the range $[-30\,\text{mm}, 30\,\text{mm}]$, LCNet achieves a root-mean-square error (RMSE) of 3.46 mm on the MedShapeNet dataset, whereas CPD and RoITr yield 7.65 mm $(p < 0.001)$ and 6.71 mm $(p < 0.001)$, respectively. Experimental results show that LCNet significantly outperforms existing state-of-the-art registration methods, pointing to its promising use in image-guided liver surgery.
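Two of the abstract's computational ingredients, the OT-based soft correspondence and the Chamfer/RMSE metrics used in evaluation, can be sketched in plain NumPy. This is an illustrative reconstruction only, not the authors' implementation: the entropy-regularised Sinkhorn solver, the `eps` and iteration defaults, and all function names are assumptions.

```python
import numpy as np

def sinkhorn(cost, eps=0.05, n_iters=200):
    """Entropy-regularised optimal transport via Sinkhorn iterations.
    Given an (N, M) cost matrix, returns a soft correspondence
    (transport) matrix P with approximately uniform marginals."""
    n, m = cost.shape
    K = np.exp(-cost / eps)                 # Gibbs kernel
    a = np.full(n, 1.0 / n)                 # uniform source marginal
    b = np.full(m, 1.0 / m)                 # uniform target marginal
    v = np.ones(m)
    for _ in range(n_iters):
        u = a / (K @ v)                     # scale rows toward a
        v = b / (K.T @ u)                   # scale columns toward b
    return u[:, None] * K * v[None, :]      # P = diag(u) K diag(v)

def chamfer_distance(X, Y):
    """Symmetric Chamfer distance between point sets X (N, 3) and Y (M, 3):
    mean squared distance to the nearest neighbour, in both directions."""
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def rmse(X, Y):
    """Root-mean-square error between point sets with known
    one-to-one correspondence (rows aligned)."""
    return np.sqrt(np.mean(np.sum((X - Y) ** 2, axis=1)))
```

A soft correspondence from `sinkhorn` can be built from a cost matrix that mixes Euclidean distances between the original points with distances between learned features, mirroring the OT module described in the abstract; the Chamfer distance is the (unsupervised) training loss the abstract credits for LCNet's edge over its ED/WD variants.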