Transformer-based 2D/3D medical image registration for X-ray to CT via anatomical features
Feng Qu, Min Zhang, Weili Shi, Wei He, Zhengang Jiang
International Journal of Medical Robotics and Computer Assisted Surgery, published 28 December 2023. DOI: 10.1002/rcs.2619
Abstract
Background
2D/3D medical image registration is one of the key technologies that enable surgical navigation systems to perform pose estimation and achieve accurate positioning, yet it remains challenging. The purpose of this study is to introduce a new method for X-ray to CT 2D/3D registration and to conduct a feasibility study.
Methods
In this study, a 2D/3D affine registration method based on feature point detection is investigated. It combines the morphological and edge features of spinal images to accurately extract feature points, and uses graph neural networks to aggregate the anatomical features of different points and enrich local detail information. Global and positional information are extracted by a Swin Transformer.
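The abstract gives no implementation details, so the following is only a minimal, hypothetical sketch of such a pipeline in PyTorch. The module names, feature dimensions, the plain nn.TransformerEncoder standing in for the Swin Transformer, and the 6-parameter pose head are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (assumed names/shapes): per-point descriptors from detected
# feature points are aggregated with a small graph layer (local detail), combined
# with a transformer encoding (global/positional context), and regressed to
# affine pose parameters.
import torch
import torch.nn as nn


class GraphAggregation(nn.Module):
    """One round of mean-neighbour message passing over detected feature points."""
    def __init__(self, dim):
        super().__init__()
        self.update = nn.Linear(2 * dim, dim)

    def forward(self, feats, adj):
        # feats: (N, D) per-point descriptors; adj: (N, N) binary adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neighbour_mean = adj @ feats / deg               # aggregate neighbouring anatomy
        return torch.relu(self.update(torch.cat([feats, neighbour_mean], dim=1)))


class AffineRegressor(nn.Module):
    """Fuse local (graph) and global (transformer) features, predict affine parameters."""
    def __init__(self, dim=64, n_params=6):
        super().__init__()
        self.graph = GraphAggregation(dim)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)  # stand-in for Swin
        self.head = nn.Linear(dim, n_params)

    def forward(self, feats, adj):
        local = self.graph(feats, adj)                   # (N, D) local detail
        global_ctx = self.encoder(feats.unsqueeze(0))    # (1, N, D) global/positional info
        fused = local + global_ctx.squeeze(0)
        return self.head(fused.mean(dim=0))              # pooled -> affine parameters


# Toy usage: 32 feature points with 64-d descriptors and a random adjacency.
points = torch.randn(32, 64)
adjacency = (torch.rand(32, 32) > 0.7).float()
pose = AffineRegressor()(points, adjacency)
print(pose.shape)  # torch.Size([6])
```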
Results
The results indicate that the proposed method improves both accuracy and success ratio compared with other methods. The mean target registration error was as low as 0.31 mm, and the runtime overhead was substantially lower, with an average runtime of about 0.6 s. This improves registration accuracy and efficiency, demonstrating the effectiveness of the proposed method.
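For reference, mean target registration error (mTRE) is commonly computed as the average Euclidean distance between target points mapped by the estimated and ground-truth transforms. A small illustrative sketch follows; the points and transforms are synthetic, not the paper's data.

```python
# Illustrative mTRE computation: average distance between target points mapped by
# the estimated vs. ground-truth transform (all inputs below are made up).
import numpy as np

def mean_tre(points, T_est, T_gt):
    """points: (N, 3) target points in mm; T_est, T_gt: (4, 4) homogeneous transforms."""
    homog = np.hstack([points, np.ones((points.shape[0], 1))])   # (N, 4) homogeneous coords
    diff = (homog @ T_est.T - homog @ T_gt.T)[:, :3]             # gap between mapped points
    return np.linalg.norm(diff, axis=1).mean()

targets = np.random.rand(100, 3) * 50.0                          # synthetic targets (mm)
T_gt = np.eye(4)
T_est = np.eye(4); T_est[:3, 3] = [0.2, -0.1, 0.15]              # small translation error
print(f"mTRE = {mean_tre(targets, T_est, T_gt):.2f} mm")
```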
Conclusions
The proposed method can provide more comprehensive image information and shows good prospects for pose estimation and accurate positioning in surgical navigation systems.
About the journal:
The International Journal of Medical Robotics and Computer Assisted Surgery provides a cross-disciplinary platform for presenting the latest developments in robotics and computer assisted technologies for medical applications. The journal publishes cutting-edge papers and expert reviews, complemented by commentaries, correspondence and conference highlights that stimulate discussion and exchange of ideas. Areas of interest include robotic surgery aids and systems, operative planning tools, medical imaging and visualisation, simulation and navigation, virtual reality, intuitive command and control systems, haptics and sensor technologies. In addition to research and surgical planning studies, the journal welcomes papers detailing clinical trials and applications of computer-assisted workflows and robotic systems in neurosurgery, urology, paediatric, orthopaedic, craniofacial, cardiovascular, thoraco-abdominal, musculoskeletal and visceral surgery. Articles providing critical analysis of clinical trials, assessment of the benefits and risks of the application of these technologies, commenting on ease of use, or addressing surgical education and training issues are also encouraged. The journal aims to foster a community that encompasses medical practitioners, researchers, and engineers and computer scientists developing robotic systems and computational tools in academic and commercial environments, with the intention of promoting and developing these exciting areas of medical technology.