{"title":"G-SET-DCL: a guided sequential episodic training with dual contrastive learning approach for colon segmentation.","authors":"Samir Farag Harb, Asem Ali, Mohamed Yousuf, Salwa Elshazly, Aly Farag","doi":"10.1007/s11548-024-03319-4","DOIUrl":"10.1007/s11548-024-03319-4","url":null,"abstract":"<p><strong>Purpose: </strong>This article introduces a novel deep learning approach to substantially improve the accuracy of colon segmentation even with limited data annotation, which enhances the overall effectiveness of the CT colonography pipeline in clinical settings.</p><p><strong>Methods: </strong>The proposed approach integrates 3D contextual information via guided sequential episodic training in which a query CT slice is segmented by exploiting its previous labeled CT slice (i.e., support). Segmentation starts by detecting the rectum using a Markov Random Field-based algorithm. Then, supervised sequential episodic training is applied to the remaining slices, while contrastive learning is employed to enhance feature discriminability, thereby improving segmentation accuracy.</p><p><strong>Results: </strong>The proposed method, evaluated on 98 abdominal scans of prepped patients, achieved a Dice coefficient of 97.3% and a polyp information preservation accuracy of 98.28%. Statistical analysis, including 95% confidence intervals, underscores the method's robustness and reliability. Clinically, this high level of accuracy is vital for ensuring the preservation of critical polyp details, which are essential for accurate automatic diagnostic evaluation. The proposed method performs reliably in scenarios with limited annotated data. This is demonstrated by achieving a Dice coefficient of 97.15% when the model was trained on a smaller number of annotated CT scans (e.g., 10 scans) than the testing dataset (e.g., 88 scans).</p><p><strong>Conclusions: </strong>The proposed sequential segmentation approach achieves promising results in colon segmentation. A key strength of the method is its ability to generalize effectively, even with limited annotated datasets-a common challenge in medical imaging.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"279-287"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-institutional development and testing of attention-enhanced deep learning segmentation of thyroid nodules on ultrasound.","authors":"Joseph L Cozzi, Hui Li, Jordan D Fuhrman, Li Lan, Jelani Williams, Brendan Finnerty, Thomas J Fahey, Abhinay Tumati, Joshua Genender, Xavier M Keutgen, Maryellen L Giger","doi":"10.1007/s11548-024-03294-w","DOIUrl":"10.1007/s11548-024-03294-w","url":null,"abstract":"<p><strong>Purpose: </strong>Thyroid nodules are common, and ultrasound-based risk stratification using ACR's TIRADS classification is a key step in predicting nodule pathology. Determining thyroid nodule contours is necessary for the calculation of TIRADS scores and can also be used in the development of machine learning nodule diagnosis systems. This paper presents the development, validation, and multi-institutional independent testing of a machine learning system for the automatic segmentation of thyroid nodules on ultrasound.</p><p><strong>Methods: </strong>The datasets, containing a total of 1595 thyroid ultrasound images from 520 patients with thyroid nodules, were retrospectively collected under IRB approval from University of Chicago Medicine (UCM) and Weill Cornell Medical Center (WCMC). Nodules were manually contoured by a team of UCM and WCMC physicians for ground truth. An AttU-Net, a U-Net architecture with additional attention weighting functions, was trained for the segmentations. The algorithm was validated through fivefold cross-validation by nodule and was tested on two independent test sets: one from UCM and one from WCMC. Dice similarity coefficient (DSC) and percent Hausdorff distance (%HD), Hausdorff distance reported as a percent of the nodule's effective diameter, served as the performance metrics.</p><p><strong>Results: </strong>On multi-institutional independent testing, the AttU-Net yielded average DSCs (std. deviation) of 0.915 (0.04) and 0.922 (0.03) and %HDs (std. deviation) of 12.9% (4.6) and 13.4% (6.3) on the UCM and WCMC test sets, respectively. Similarity testing showed the algorithm's performance on the two institutional test sets was equivalent up to margins of <math><mi>Δ</mi></math> DSC <math><mo>≤</mo></math> 0.013 and <math><mi>Δ</mi></math> %HD <math><mo>≤</mo></math> 1.73%.</p><p><strong>Conclusions: </strong>This work presents a robust automatic thyroid nodule segmentation algorithm that could be implemented for risk stratification systems. Future work is merited to incorporate this segmentation method within an automatic thyroid classification system.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"259-267"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142923665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synchronising a stereoscopic surgical video stream using specular reflection.","authors":"Kilian Chandelon, Adrien Bartoli","doi":"10.1007/s11548-024-03232-w","DOIUrl":"10.1007/s11548-024-03232-w","url":null,"abstract":"<p><strong>Purpose: </strong>A stereoscopic surgical video stream consists of left-right image pairs provided by a stereo endoscope. While the surgical display shows these image pairs synchronised, most capture cards cause de-synchronisation. This means that the paired left and right images may not correspond once used in downstream tasks such as stereo depth computation. The stereo synchronisation problem is to recover the corresponding left-right images. This is particularly challenging in the surgical setting, owing to the moist tissues, rapid camera motion, quasi-staticity and real-time processing requirement. Existing methods exploit image cues from the diffuse reflection component and are defeated by the above challenges.</p><p><strong>Methods: </strong>We propose to exploit the specular reflection. Specifically, we propose a powerful left-right comparison score (LRCS) using the specular highlights commonly occurring on moist tissues. We detect the highlights using a neural network, characterise them with invariant descriptors, match them, and use the number of matches to form the proposed LRCS. We perform evaluation against 147 existing LRCS in 44 challenging robotic partial nephrectomy and robotic-assisted hepatic resection video sequences with simulated and real de-synchronisation.</p><p><strong>Results: </strong>The proposed LRCS outperforms, with an average and maximum offsets of 0.055 and 1 frames and 94.1±3.6% successfully synchronised frames. In contrast, the best existing LRCS achieves an average and maximum offsets of 0.3 and 3 frames and 81.2±6.4% successfully synchronised frames.</p><p><strong>Conclusion: </strong>The use of specular reflection brings a tremendous boost to the real-time surgical stereo synchronisation problem.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"289-299"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141762451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Leveraging domain knowledge for synthetic ultrasound image generation: a novel approach to rare disease AI detection.","authors":"M Mendez, F Castillo, L Probyn, S Kras, P N Tyrrell","doi":"10.1007/s11548-024-03309-6","DOIUrl":"10.1007/s11548-024-03309-6","url":null,"abstract":"<p><strong>Purpose: </strong>This study explores the use of deep generative models to create synthetic ultrasound images for the detection of hemarthrosis in hemophilia patients. Addressing the challenge of sparse datasets in rare disease diagnostics, the study aims to enhance AI model robustness and accuracy through the integration of domain knowledge into the synthetic image generation process.</p><p><strong>Methods: </strong>The study employed two ultrasound datasets: a base dataset (Db) of knee recess distension images from non-hemophiliac patients and a target dataset (Dt) of hemarthrosis images from hemophiliac patients. The synthetic generation framework included a content generator (Gc) trained on Db and a context generator (Gs) to adapt these images to match Dt's context. This approach generated a synthetic target dataset (Ds), primed for AI training in rare disease research. The assessment of synthetic image generation involved expert evaluations, statistical analysis, and the use of domain-invariant perceptual distance and Fréchet inception distance for quality measurement.</p><p><strong>Results: </strong>Expert evaluation revealed that images produced by our synthetic generation framework were comparable to real ones, with no significant difference in overall quality or anatomical accuracy. Additionally, the use of synthetic data in training convolutional neural networks demonstrated robustness in detecting hemarthrosis, especially with limited sample sizes.</p><p><strong>Conclusion: </strong>This study presents a novel approach for generating synthetic ultrasound images for rare disease detection, such as hemarthrosis in hemophiliac knees. By leveraging deep generative models and integrating domain knowledge, the proposed framework successfully addresses the limitations of sparse datasets and enhances AI model training and robustness. The synthetic images produced are of high quality and contribute significantly to AI-driven diagnostics in rare diseases, highlighting the potential of synthetic data in medical imaging.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"415-431"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142911124","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised stain augmentation enhanced glomerular instance segmentation on pathology images.","authors":"Fan Yang, Qiming He, Yanxia Wang, Siqi Zeng, Yingming Xu, Jing Ye, Yonghong He, Tian Guan, Zhe Wang, Jing Li","doi":"10.1007/s11548-024-03154-7","DOIUrl":"10.1007/s11548-024-03154-7","url":null,"abstract":"<p><strong>Purpose: </strong>In pathology images, different stains highlight different glomerular structures, so a supervised deep learning-based glomerular instance segmentation model trained on individual stains performs poorly on other stains. However, it is difficult to obtain a training set with multiple stains because the labeling of pathology images is very time-consuming and tedious. Therefore, in this paper, we proposed an unsupervised stain augmentation-based method for segmentation of glomerular instances.</p><p><strong>Methods: </strong>In this study, we successfully realized the conversion between different staining methods such as PAS, MT and PASM by contrastive unpaired translation (CUT), thus improving the staining diversity of the training set. Moreover, we replaced the backbone of mask R-CNN with swin transformer to further improve the efficiency of feature extraction and thus achieve better performance in instance segmentation task.</p><p><strong>Results: </strong>To validate the method presented in this paper, we constructed a dataset from 216 WSIs of the three stains in this study. After conducting in-depth experiments, we verified that the instance segmentation method based on stain augmentation outperforms existing methods across all metrics for PAS, PASM, and MT stains. Furthermore, ablation experiments are performed in this paper to further demonstrate the effectiveness of the proposed module.</p><p><strong>Conclusion: </strong>This study successfully demonstrated the potential of unsupervised stain augmentation to improve glomerular segmentation in pathology analysis. Future research could extend this approach to other complex segmentation tasks in the pathology image domain to further explore the potential of applying stain augmentation techniques in different domains of pathology image analysis.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"225-236"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preoperative and intraoperative laparoscopic liver surface registration using deep graph matching of representative overlapping points.","authors":"Yue Dai, Xiangyue Yang, Junchen Hao, Huoling Luo, Guohui Mei, Fucang Jia","doi":"10.1007/s11548-024-03312-x","DOIUrl":"10.1007/s11548-024-03312-x","url":null,"abstract":"<p><strong>Purpose: </strong>In laparoscopic liver surgery, registering preoperative CT-extracted 3D models with intraoperative laparoscopic video reconstructions of the liver surface can help surgeons predict critical liver anatomy. However, the registration process is challenged by non-rigid deformation of the organ due to intraoperative pneumoperitoneum pressure, partial visibility of the liver surface, and surface reconstruction noise.</p><p><strong>Methods: </strong>First, we learn point-by-point descriptors and encode location information to alleviate the limitations of descriptors in location perception. In addition, we introduce a GeoTransformer to enhance the geometry perception to cope with the problem of inconspicuous liver surface features. Finally, we construct a deep graph matching module to optimize the descriptors and learn overlap masks to robustly estimate the transformation parameters based on representative overlap points.</p><p><strong>Results: </strong>Evaluation of our method with comparative methods on both simulated and real datasets shows that our method achieves state-of-the-art results, realizing the lowest surface registration error(SRE) 4.12 mm with the highest inlier ratios (IR) 53.31% and match scores (MS) 28.17%.</p><p><strong>Conclusion: </strong>Highly accurate and robust initialized registration obtained from partial information can be achieved while meeting the speed requirement. Non-rigid registration can further enhance the accuracy of the registration process on this basis.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"269-278"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142910423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of percutaneous intervention robotic system for skin insertion force.","authors":"Benfang Duan, Biao Jia, Cheng Wang, Shijia Chen, Jun Xu, Gao-Jun Teng","doi":"10.1007/s11548-024-03274-0","DOIUrl":"10.1007/s11548-024-03274-0","url":null,"abstract":"<p><strong>Purpose: </strong>Percutaneous puncture is a common interventional procedure, and its effectiveness is influenced by the insertion force of the needle. To optimize outcomes, we focus on reducing the peak force of the needle in the skin, aiming to apply this method to other tissue layers.</p><p><strong>Methods: </strong>We developed a clinical puncture system, setting and measuring various variables. We analyzed their effects, introduced admittance control, set thresholds, and adjusted parameters. Finally, we validated these methods to ensure their effectiveness.</p><p><strong>Results: </strong>Our system meets application requirements. We assessed the impact of various variables on peak force and validated the effectiveness of the new method. Results show a reduction of about 50% in peak force compared to the maximum force condition and about 13% compared to the minimum force condition. Finally, we summarized the factors to consider when applying this method.</p><p><strong>Conclusion: </strong>To achieve peak force suppression, initial puncture variables should be set based on the trends in variable impact. Additionally, the factors of the new method should be introduced using these initial settings. When selecting these factors, the characteristics of the new method must also be considered. This process will help to better optimize peak puncture force.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"345-355"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142607345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simulation of thoracic endovascular aortic repair in a perfused patient-specific model of type B aortic dissection.","authors":"Lukas Mohl, Roger Karl, Matthias N Hagedorn, Armin Runz, Stephan Skornitzke, Malte Toelle, C Soeren Bergt, Johannes Hatzl, Christian Uhl, Dittmar Böckler, Katrin Meisenbacher, Sandy Engelhardt","doi":"10.1007/s11548-024-03190-3","DOIUrl":"10.1007/s11548-024-03190-3","url":null,"abstract":"<p><strong>Purpose: </strong>Complicated type B Aortic dissection is a severe aortic pathology that requires treatment through thoracic endovascular aortic repair (TEVAR). During TEVAR a stentgraft is deployed in the aortic lumen in order to restore blood flow. Due to the complicated pathology including an entry, a resulting dissection wall with potentially several re-entries, replicating this structure artificially has proven to be challenging thus far.</p><p><strong>Methods: </strong>We developed a 3d printed, patient-specific and perfused aortic dissection phantom with a flexible dissection flap and all major branching vessels. The model was segmented from CTA images and fabricated out of a flexible material to mimic aortic wall tissue. It was placed in a pulsatile hemodynamic flow loop. Hemodynamics were investigated through pressure and flow measurements and doppler ultrasound imaging. Surgeons performed a TEVAR intervention including stentgraft deployment under fluoroscopic guidance.</p><p><strong>Results: </strong>The flexible aortic dissection phantom was successfully incorporated in the hemodynamic flow loop, a systolic pressure of 112 mmHg and physiological flow of 4.05 L per minute was reached. Flow velocities were higher in true lumen with a up to 35.7 cm/s compared to the false lumen with a maximum of 13.3 cm/s, chaotic flow patterns were observed on main entry and reentry sights. A TEVAR procedure was successfully performed under fluoroscopy. The position of the stentgraft was confirmed using CTA imaging.</p><p><strong>Conclusions: </strong>This perfused in-vitro phantom allows for detailed investigation of the complex inner hemodynamics of aortic dissections on a patient-specific level and enables the simulation of TEVAR procedures in a real endovascular operating environment. Therefore, it could provide a dynamic platform for future surgical training and research.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"391-404"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11807923/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141285347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hybrid registration of the fibula for electromagnetically navigated osteotomies in mandibular reconstructive surgery: a phantom study.","authors":"L M N Aukema, A F de Geer, M J A van Alphen, W H Schreuder, R L P van Veen, T J M Ruers, F J Siepel, M B Karakullukcu","doi":"10.1007/s11548-024-03282-0","DOIUrl":"10.1007/s11548-024-03282-0","url":null,"abstract":"<p><strong>Purpose: </strong>In mandibular reconstructive surgery with free fibula flap, 3D-printed patient-specific cutting guides are the current state of the art. Although these guides enable accurate transfer of the virtual surgical plan to the operating room, disadvantages include long waiting times until surgery and the inability to change the virtual plan intraoperatively in case of tumor growth. Alternatively, (electromagnetic) surgical navigation combined with a non-patient-specific cutting guide could be used, requiring accurate image-to-patient registration. In this phantom study, we evaluated the accuracy of a hybrid registration method for the fibula and the additional error that is caused by navigating with a prototype of a novel non-patient-specific cutting guide to virtually planned osteotomy planes.</p><p><strong>Methods: </strong>The accuracy of hybrid registration and navigation was assessed in terms of target registration error (TRE), angular difference, and length difference of the intended fibula segments using three 3D-printed fibular phantoms with assessment points on osteotomy planes. Using electromagnetic tracking, hybrid registration was performed with point registration followed by surface registration on the lateral fibular surface. The fibula was fixated in the non-patient-specific cutting guide to navigate to planned osteotomy planes after which the accuracy was assessed.</p><p><strong>Results: </strong>Registration was achieved with a mean TRE, angular difference, and segment length difference of 2.3 ± 0.9 mm, 2.1 ± 1.4°, and 0.3 ± 0.3 mm respectively after hybrid registration. Navigation with the novel cutting guide increased the length difference (0.7 ± 0.6 mm), but decreased the angular difference (1.8 ± 1.3°).</p><p><strong>Conclusion: </strong>Hybrid registration showed to be a feasible and noninvasive method to register the fibula in phantom setup and could be used for electromagnetically navigated osteotomies with a novel non-patient-specific cutting guide. Future studies should focus on testing this registration method in clinical setting.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"369-377"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142711739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time intraoperative ultrasound registration for accurate surgical navigation in patients with pelvic malignancies.","authors":"M A J Hiep, W J Heerink, H C Groen, L Aguilera Saiz, B A Grotenhuis, G L Beets, A G J Aalbers, K F D Kuhlmann, T J M Ruers","doi":"10.1007/s11548-024-03299-5","DOIUrl":"10.1007/s11548-024-03299-5","url":null,"abstract":"<p><strong>Purpose: </strong>Surgical navigation aids surgeons in localizing and adequately resecting pelvic malignancies. Accuracy of the navigation system highly depends on the preceding registration procedure, which is generally performed using intraoperative fluoroscopy or CT. However, these ionizing methods are time-consuming and peroperative updates of the registration are cumbersome. In this present clinical study, several real-time intraoperative ultrasound (iUS) registration methods have been developed and evaluated for accuracy.</p><p><strong>Methods: </strong>During laparotomy in prospectively included patients, sterile electromagnetically tracked iUS acquisitions of the pelvic vessels and bones were collected. An initial registration and five other rigid iUS registration methods were developed including real-time deep learning bone and artery segmentation of 2D ultrasound. For each registration method, the accuracy was computed as the target registration error (TRE) using pelvic lymph nodes (LNs) as targets.</p><p><strong>Results: </strong>Thirty patients were included. The mean ± SD ultrasound acquisition time was 4.2 ± 1.4 min for the pelvic bone and 4.0 ± 1.1 min for the arteries. Deep learning bone and artery ultrasound segmentation resulted in an average (centerline)Dice of 0.85 and a mean surface distance below 2 mm. In 21 patients with visible LNs, initial registration resulted in a median (interquartile range [IQR]) TRE of 7.4 (5.9-10.9) mm. For the other five methods, 2D and 3D bone registration resulted in significantly lower TREs than 2D artery, 3D artery and bifurcation registration (two-sided Wilcoxon rank-sum test p < 0.01). The real-time 2D bone registration method was most accurate with a median (IQR) TRE of 2.6 (1.3-5.7) mm.</p><p><strong>Conclusion: </strong>Real-time 2D iUS bone registration is a fast and accurate method for patient registration prior to surgical navigation and has advantages over current registration techniques. Because of the user dependency of iUS, intuitive software is crucial for optimal clinical implementation. Trial registration number ClinicalTrials.gov (No. NCT05637346).</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"249-258"},"PeriodicalIF":2.3,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142781677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}