{"title":"2D/3D fast fine registration in minimally invasive pelvic surgery","authors":"Fujiao Ju , Yuan Li , Jingxin Zhao , Mingjie Dong","doi":"10.1016/j.bspc.2024.107145","DOIUrl":null,"url":null,"abstract":"<div><div>The 2D/3D rigid registration between preoperative 3D CT and intraoperative 2D X-ray is a crucial step in minimally invasive pelvic surgery. The deep learning-based 2D/3D registration methods address the inefficiencies of traditional approaches. However, the wide range of spatial transformation parameters and other complexities pose significant challenges for achieving accurate registration in a single step. Additionally, the stylistic differences between Digitally Reconstructed Radiographs (DRRs) used in training and real X-ray images limit the practical applicability of most methods. To overcome these challenges, we propose a 2D/3D fast registration framework comprising a coarse registration network, fine registration based on key point tracking and alignment, and domain adaptation. Coarse registration using plug-and-play attention is introduced to preliminarily estimate transformation parameters. Then we design a key point tracking network to match key points between different images, and leverage points alignment to achieve fine registration. To address the stylistic differences between DRR and X-ray images, we investigate a domain adaptation network. The experiments were conducted on DRR and X-ray images, respectively. Our method achieved a mean absolute error of 0.58 on DRR and a structural similarity of 78% on X-ray, outperforming baseline methods. Extensive ablation studies demonstrate that fine registration and domain adaptation significantly improve registration performance.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"100 ","pages":"Article 107145"},"PeriodicalIF":4.9000,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809424012035","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Citations: 0
Abstract
The 2D/3D rigid registration between preoperative 3D CT and intraoperative 2D X-ray is a crucial step in minimally invasive pelvic surgery. Deep learning-based 2D/3D registration methods address the inefficiencies of traditional approaches. However, the wide range of spatial transformation parameters and other complexities make accurate registration in a single step difficult. Additionally, the stylistic differences between the Digitally Reconstructed Radiographs (DRRs) used in training and real X-ray images limit the practical applicability of most methods. To overcome these challenges, we propose a 2D/3D fast registration framework comprising a coarse registration network, fine registration based on key point tracking and alignment, and domain adaptation. Coarse registration using plug-and-play attention is introduced to preliminarily estimate the transformation parameters. We then design a key point tracking network to match key points between different images, and leverage point alignment to achieve fine registration. To address the stylistic differences between DRR and X-ray images, we investigate a domain adaptation network. Experiments were conducted on both DRR and X-ray images. Our method achieved a mean absolute error of 0.58 on DRR images and a structural similarity of 78% on X-ray images, outperforming baseline methods. Extensive ablation studies demonstrate that fine registration and domain adaptation significantly improve registration performance.
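The abstract only sketches how the tracked key points drive the fine registration. As a rough illustration, not the authors' published method, the snippet below fits a 2D rigid correction between key points from the DRR rendered at the coarse pose and their matches in the X-ray with a least-squares (Kabsch/Procrustes-style) solve; the function name rigid_fit_2d and the toy points are illustrative assumptions only.

```python
# Hedged sketch: estimate a 2D rigid correction (rotation + translation) that
# best aligns key points tracked in the coarse-pose DRR to the matched key
# points in the intraoperative X-ray. The paper's exact alignment formulation
# is not given in the abstract; this only illustrates the general idea.
import numpy as np

def rigid_fit_2d(src_pts: np.ndarray, dst_pts: np.ndarray):
    """Return (R, t) minimizing sum ||R @ src_i + t - dst_i||^2 for matched (N, 2) point sets."""
    src_mean = src_pts.mean(axis=0)
    dst_mean = dst_pts.mean(axis=0)
    src_c = src_pts - src_mean
    dst_c = dst_pts - dst_mean

    # Cross-covariance and SVD give the optimal rotation (Kabsch).
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Toy usage: hypothetical key points from the coarse-pose DRR (src) and their
# matches in the X-ray (dst); the recovered (R, t) refines the in-plane pose.
src = np.array([[120.0, 80.0], [200.0, 95.0], [160.0, 150.0], [90.0, 140.0]])
theta = np.deg2rad(3.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
dst = src @ R_true.T + np.array([4.0, -2.5])
R_est, t_est = rigid_fit_2d(src, dst)
print(np.round(R_est, 4), np.round(t_est, 2))
```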
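Similarly, the abstract reports a mean absolute error of 0.58 and a structural similarity of 78% without specifying how they are computed. A minimal sketch, assuming MAE is taken over the rigid-transformation parameters and SSIM between the registered DRR and the target X-ray (both assumptions, since the abstract does not say):

```python
# Hedged sketch of how such metrics could be computed; not the authors' evaluation code.
import numpy as np
from skimage.metrics import structural_similarity

def mean_absolute_error(pred_params: np.ndarray, gt_params: np.ndarray) -> float:
    """MAE over predicted vs. ground-truth rigid-transformation parameters."""
    return float(np.mean(np.abs(pred_params - gt_params)))

def ssim_score(registered_drr: np.ndarray, xray: np.ndarray) -> float:
    """SSIM between the DRR rendered at the estimated pose and the real X-ray."""
    return structural_similarity(registered_drr, xray,
                                 data_range=float(xray.max() - xray.min()))
```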
About the journal:
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with the practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.