BEA-CACE: branch-endpoint-aware double-DQN for coronary artery centerline extraction in CT angiography images.
Yuyang Zhang, Gongning Luo, Wei Wang, Shaodong Cao, Suyu Dong, Daren Yu, Xiaoyun Wang, Kuanquan Wang
International Journal of Computer Assisted Radiology and Surgery, pages 2131-2143, published 2025-10-01. DOI: 10.1007/s11548-025-03483-1.

Purpose: To automate centerline extraction of the coronary tree, three challenges must be addressed: tracking branches automatically, passing through plaques successfully, and detecting endpoints accurately. This study aims to develop a method that solves all three.
Methods: We propose a branch-endpoint-aware coronary centerline extraction framework consisting of a deep reinforcement learning-based tracker and a 3D dilated CNN-based detector. The tracker predicts the actions of an agent that follows the centerline. The detector identifies bifurcation points and endpoints, helping the tracker to follow branches and to terminate tracking automatically; it can also estimate the radius of the coronary artery.
Results: The method achieves state-of-the-art performance in both centerline extraction and radius estimation. Furthermore, it requires minimal user interaction to extract a coronary tree, surpassing other interactive methods in this respect.
Conclusion: The method can track branches automatically, pass through plaques successfully, and detect endpoints accurately. Compared with other interactive methods that require multiple seeds, our method needs only one seed to extract the entire coronary tree.

Towards automatic quantification of operating table interaction in operating rooms.
Rick M Butler, Anne M Schouten, Anne C van der Eijk, Maarten van der Elst, Benno H W Hendriks, John J van den Dobbelsteen
International Journal of Computer Assisted Radiology and Surgery, pages 1999-2010, published 2025-10-01. DOI: 10.1007/s11548-025-03363-8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518443/pdf/

Purpose: Perioperative staff shortages are a problem in hospitals worldwide, and keeping staff content and motivated is a challenge in today's busy hospital setting. New operating room technologies aim to increase safety and efficiency, causing a shift from interaction with patients to interaction with technology. Objectively measuring this shift could aid the design of supportive technological products or optimal planning for high-tech procedures.
Methods: Thirty-five gynaecological procedures of three technology levels are recorded: open (OS), minimally invasive (MIS) and robot-assisted (RAS) surgery. We annotate interaction between staff and the patient. An algorithm is proposed that detects interaction with the operating table from staff posture and movement; interaction is expressed as a percentage of total working time.
Results: The proposed algorithm measures operating table interaction of 70.4%, 70.3% and 30.1% during OS, MIS and RAS, respectively. Annotations yield patient interaction percentages of 37.6%, 38.3% and 24.6%. Algorithm measurements over time show peaks in operating table and patient interaction at anomalous events or workflow phase transitions.
Conclusions: The annotations show less operating table and patient interaction during RAS than during OS and MIS. Annotated patient interaction and measured operating table interaction show similar differences between procedures and workflow phases. The visual complexity of operating rooms complicates pose tracking, deteriorating the quality of the algorithm's input. The proposed algorithm shows promise as a component in context-aware event or workflow phase detection.

Hybrid 3D augmented reality for image-guided therapy using autostereoscopic visualization.
Viktor Vörös, Xuan Thao Ha, Wim-Alexander Beckers, Johan Bennett, Tom Kimpe, Emmanuel Vander Poorten
International Journal of Computer Assisted Radiology and Surgery, pages 2145-2152, published 2025-10-01. DOI: 10.1007/s11548-025-03357-6.

Purpose: During image-guided therapy, cardiologists use 2-dimensional (2D) imaging modalities to navigate catheters, resulting in a loss of depth perception. Augmented reality (AR) is being explored to overcome this challenge by visualizing patient-specific 3D models or the 3D shape of the catheter. However, when this 3D content is presented on a 2D display, important depth information may be lost. This paper proposes a hybrid 3D AR visualization method combining stereo 3D AR guidance with conventional 2D modalities.
Methods: A cardiovascular catheterization simulator was developed, consisting of a phantom vascular model, a catheter with embedded shape sensing, and an autostereoscopic display. A user study involving interventional cardiologists (n = 5) and electrophysiologists (n = 2) was set up. The study compared the hybrid 3D AR guidance with simulated fluoroscopy and 2D AR guidance in a catheter navigation task.
Results: Despite improvements in task time and traveled path length, the difference in performance was not significant. However, reductions of 50% and 81% in the number of incorrect artery entries were found with 2D and hybrid 3D AR, respectively. Questionnaire results showed a reduced mental load and higher confidence with the proposed hybrid 3D AR guidance. All but one participant indicated that they felt comfortable looking at the hybrid 3D view.
Conclusion: The findings suggest that AR guidance, particularly in a hybrid 3D visualization format, enhances spatial awareness and reduces mental load for cardiologists. The autostereoscopic 3D view demonstrated superiority in estimating the pose of the catheter and its relationship to the vascular model.

Identifying visible tissue in intraoperative ultrasound: a method and application.
Alistair Weld, Luke Dixon, Michael Dyck, Giulio Anichini, Alex Ranne, Sophie Camp, Stamatia Giannarou
International Journal of Computer Assisted Radiology and Surgery, pages 2107-2117, published 2025-10-01. DOI: 10.1007/s11548-025-03415-z. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518381/pdf/

Purpose: Intraoperative ultrasound scanning is a demanding visuotactile task. It requires operators to simultaneously localise the ultrasound perspective and manually make slight adjustments to the pose of the probe, taking care not to apply excessive force or break contact with the tissue, while also characterising the visible tissue.
Method: To analyse probe-tissue contact, an iterative filtering and topological method is proposed to identify the underlying visible tissue, which can be used to detect acoustic shadow and construct confidence maps of perceptual salience.
Results: For evaluation, datasets containing both in vivo and medical phantom data are created. A suite of evaluations is performed, including an evaluation of acoustic shadow classification. Compared with an ablation, a deep learning method, and a statistical method, the proposed approach achieves superior classification on in vivo data, with an Fβ score of 0.864 versus 0.838, 0.808 and 0.808. A novel framework for evaluating the confidence estimation of probe-tissue contact is created; the phantom data are captured specifically for this, and comparison is made against two established methods. The proposed method produces the superior response, achieving an average normalised root-mean-square error of 0.168, compared with 1.836 and 4.542. Evaluation is also extended to determine the algorithm's robustness to parameter perturbation, speckle noise and data distribution shift, and its capability for guiding a robotic scan.
Conclusion: The results of this comprehensive set of experiments justify the potential clinical value of the proposed algorithm, which can be used to support clinical training and robotic ultrasound automation.

{"title":"A fast and robust geometric point cloud registration model for orthopedic surgery with noisy and incomplete data.","authors":"Jiashi Zhao, Zihan Xu, Fei He, Jianhua Liu, Zhengang Jiang","doi":"10.1007/s11548-025-03387-0","DOIUrl":"10.1007/s11548-025-03387-0","url":null,"abstract":"<p><strong>Purpose: </strong>Accurate registration of partial-to-partial point clouds is crucial in computer-assisted orthopedic surgery but faces challenges due to incomplete data, noise, and partial overlap. This paper proposes a novel geometric fast registration (GFR) model that addresses these issues through three core modules: point extractor registration (PER), dual attention transformer (DAT), and geometric feature matching (GFM).</p><p><strong>Methods: </strong>PER operates within the frequency domain to enhance point cloud data by attenuating noise and reconstructing incomplete regions. DAT augments feature representation by correlating independent features from source and target point clouds, improving model expressiveness. GFM identifies geometrically consistent point pairs, completing missing data and refining registration accuracy.</p><p><strong>Results: </strong>We conducted experiments using the clinical bone dataset of 1432 distinct human skeletal samples, comprising ribs, scapulae, and fibula. The proposed model exhibited remarkable robustness and versatility, demonstrating consistent performance across diverse bone structures. When evaluated to noisy, partial-to-partial point clouds with incomplete bone data, the model achieved a mean squared error of 3.57 for rotation and a mean absolute error of 1.29. The mean squared error for translation was 0.002, with a mean absolute error of 0.038.</p><p><strong>Conclusion: </strong>Our proposed GFR model exhibits exceptional speed and universality, effectively handling point clouds with defects, noise, and partial overlap. Extensive experiments conducted on bone datasets demonstrate the superior performance of our model compared to state-of-the-art methods. The code is publicly available at https://github.com/xzh128/PER .</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2053-2063"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144007642","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhancing surgical efficiency with an automated scrub nurse robot: a focus on automatic instrument insertion.","authors":"Kitaro Yoshimitsu, Ken Masamune, Fujio Miyawaki","doi":"10.1007/s11548-025-03433-x","DOIUrl":"10.1007/s11548-025-03433-x","url":null,"abstract":"<p><strong>Purpose: </strong>To address the chronic shortage of skilled scrub nurses, we propose the development of a scrub nurse robot (SNR). This paper describes the third-generation of our SNR, which implements the automatic insertion of surgical instruments (AISI). We focused on optimizing the instrument provision part of the instrument exchange task, which is a crucial role of the scrub nurse.</p><p><strong>Methods: </strong>The third-generation SNR detects the moment when an operating surgeon withdraws an instrument after use from the trocar cannula, automatically conveys the next instrument to the cannula, and inserts only its tip into the cannula. Thereafter, the surgeon is required to grip the instrument and to push it fully into the cannula. This robotic function is designated as AISI. The following three combinations were compared: (1) third-generation SNR and surgeon stand-ins in a laboratory experiment, (2) three human scrub nurses and a skilled expert surgeon in three real surgical cases, (3) second-generation SNR and surgeon stand-ins in a laboratory experiment.</p><p><strong>Results: </strong>The third-generation SNR and surgeon stand-ins were 53% slower and 34% faster, respectively, in targeting the instruments during the instrument exchange sequence compared with the actual OR nurse-surgeon pair and the second-generation SNR-stand-in pair. The average \"eyes-off\" time of the stand-ins assisted by the third-generation SNR was 0.41 s (0 s in 92 out of 138 trials), whereas that of the real surgeon in clinical cases had a mean of 1.47 (N = 138) (range, 0.69-7.24 s) when using the second-generation SNR.</p><p><strong>Conclusion: </strong>Third-generation SNR with AISI can enhance operative efficiency by contributing to smooth instrument exchange, which enhances the surgeon's ability to concentrate on a surgical procedure without interrupting the intraoperative surgical rhythm.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1975-1985"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Manual and automated facial de-identification techniques for patient imaging with preservation of sinonasal anatomy.
Andy S Ding, Nimesh V Nagururu, Stefanie Seo, George S Liu, Manish Sahu, Russell H Taylor, Francis X Creighton
International Journal of Computer Assisted Radiology and Surgery, pages 2167-2177, published 2025-10-01. DOI: 10.1007/s11548-025-03421-1.

Purpose: Facial recognition of reconstructed computed tomography (CT) scans poses patient privacy risks, necessitating reliable facial de-identification methods. Current methods obscure sinuses, turbinates, and other anatomy relevant to otolaryngology. We present a facial de-identification method that preserves these structures, along with two automated workflows for large-volume datasets.
Methods: A total of 20 adult head CTs from the New Mexico Decedent Image Database were included. Using 3D Slicer, a seed-growing technique was performed to label the skin around the face. This label was dilated bidirectionally to form a 6-mm mask that obscures facial features. The technique was then automated in two ways: (1) segmentation propagation, which deforms an atlas head CT and its corresponding mask to match other scans, and (2) a deep learning model (nnU-Net). Accuracy of these methods against manually generated masks was evaluated with Dice scores and modified Hausdorff distances (mHDs).
Results: Manual de-identification resulted in facial match rates of 45.0% (zero-fill), 37.5% (deletion), and 32.5% (re-face). Dice scores for automated face masks using segmentation propagation and nnU-Net were 0.667 ± 0.109 and 0.860 ± 0.029, respectively, with mHDs of 4.31 ± 3.04 mm and 1.55 ± 0.71 mm. Match rates after de-identification using segmentation propagation (zero-fill: 42.5%; deletion: 40.0%; re-face: 35.0%) and nnU-Net (zero-fill: 42.5%; deletion: 35.0%; re-face: 30.0%) were comparable to those of manual masks.
Conclusion: We present a simple facial de-identification approach for head CTs, as well as automated methods for large-scale implementation. These techniques show promise for preventing patient identification while preserving underlying sinonasal anatomy, but further studies using live patient photographs are necessary to fully validate their effectiveness.

Cell generation with label evolution diffusion and class mask self-attention.
Wen Jing, Zixiang Jin, Yi Zhang, Guoxia Xu, Meng Zhao
International Journal of Computer Assisted Radiology and Surgery, pages 2179-2188, published 2025-10-01. DOI: 10.1007/s11548-025-03443-9.

Purpose: Because histopathological images are relatively difficult to acquire, generated cell morphology often follows a fixed pattern and lacks diversity. To this end, we propose the first diffusion generation model based on point diffusion, which can capture the variation and diversity of cell morphology in greater detail.
Methods: By gradually updating cell morphology information during the generation process, we effectively guide the diffusion model to generate more diverse and realistic cell images. In addition, we introduce a class mask self-attention module to constrain the cell types generated by the diffusion model.
Results: We conducted experiments on the public Lizard dataset; comparative analysis with previous image generation methods shows that our method performs excellently. Compared with the latest NASDM network, our method achieves a 43.17% improvement in FID and a 46.24% enhancement in IS.
Conclusions: We proposed a first-of-its-kind diffusion model that combines point diffusion and class mask self-attention mechanisms. The model effectively generates diverse data while maintaining high image quality.

{"title":"Monocular suture needle pose detection using synthetic data augmented convolutional neural network.","authors":"Yifan Wang, Saul Alexis Heredia Perez, Kanako Harada","doi":"10.1007/s11548-025-03467-1","DOIUrl":"10.1007/s11548-025-03467-1","url":null,"abstract":"<p><strong>Purpose: </strong>Robotic microsurgery enhances the dexterity and stability of the surgeon to perform precise and delicate surgical procedures at microscopic level. Accurate needle pose estimation is critical for robotic micro-suturing, enabling optimized insertion trajectories and facilitating autonomous control. However, accurately estimating the pose of a needle during manipulation, particularly under monocular vision, remains a challenge. This study proposes a convolutional neural network-based method to estimate the pose of a suture needle from monocular images.</p><p><strong>Methods: </strong>The 3D pose of the needle is estimated using keypoints information from 2D images. A convolutional neural network was trained to estimate the positions of keypoints on the needle, specifically the tip, middle and end point. A hybrid dataset comprising images from both real-world and synthetic simulated environments was developed to train the model. Subsequently, an algorithm was designed to estimate the 3D positions of these keypoints. The 2D keypoint detection and 3D orientation estimation were evaluated by translation and orientation error metrics, respectively.</p><p><strong>Results: </strong>Experiments conducted on synthetic data showed that the average translation error of tip point, middle point and end point being 0.107 mm, 0.118 mm and 0.098 mm, and the average orientation angular error was 12.75 <math><mmultiscripts><mrow></mrow> <mrow></mrow> <mo>∘</mo></mmultiscripts> </math> for normal vector and 15.55 <math><mmultiscripts><mrow></mrow> <mrow></mrow> <mo>∘</mo></mmultiscripts> </math> for direction vector. When evaluated on real data, the method demonstrated 2D translation errors averaging 0.047 mm, 0.052 mm and 0.049 mm for the respective keypoints, with 93.85% of detected keypoints having errors below 4 pixels.</p><p><strong>Conclusions: </strong>This study presents a CNN-based method, augmented with synthetic images, to estimate the pose of a suture needle in monocular vision. Experimental results indicate that the method effectively estimates the 2D positions and 3D orientations of the suture needle in synthetic images. The model also shows reasonable performance with real data, highlighting its promise for real-time application in robotic microsurgery.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"2019-2030"},"PeriodicalIF":2.3,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518471/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144477759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automation in tibial implant loosening detection using deep-learning segmentation.
C Magg, M A Ter Wee, G S Buijs, A J Kievit, M U Schafroth, J G G Dobbe, G J Streekstra, C I Sánchez, L Blankevoort
International Journal of Computer Assisted Radiology and Surgery, pages 2065-2073, published 2025-10-01. DOI: 10.1007/s11548-025-03459-1. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12518389/pdf/

Purpose: Patients with recurrent complaints after total knee arthroplasty may suffer from aseptic implant loosening. Current imaging modalities do not quantify looseness of knee arthroplasty components. A recently developed and validated workflow quantifies the displacement of the tibial component relative to the bone from CT scans acquired under valgus and varus load. This 3D analysis approach includes segmentation and registration of the tibial component and bone. In the current approach, the semi-automatic segmentation requires user interaction, adding complexity to the analysis. The research question is whether the segmentation step can be fully automated while keeping the outcomes unchanged.
Methods: In this study, different deep-learning (DL) models for fully automatic segmentation are proposed and evaluated. We employ three datasets for model development (20 cadaveric CT pairs and 10 cadaveric CT scans) and evaluation (72 patient CT pairs). Based on performance on the development dataset, the final model was selected and its predictions replaced the semi-automatic segmentation in the current approach. Implant displacement was quantified by the rotation about the screw axis, the maximum total point motion, and the mean target registration error.
Results: The displacement parameters of the proposed approach showed a statistically significant difference between fixed and loose samples in a cadaver dataset, as well as between asymptomatic and loose samples in a patient dataset, similar to the outcomes of the current approach. The methodological error calculated on a reproducibility dataset showed values that were not statistically significantly different between the two approaches. The results of the proposed and current approaches showed excellent reliability for one and three operators on two datasets.
Conclusion: Full automation of knee implant displacement assessment is feasible by utilizing a DL-based segmentation model while maintaining the capability to distinguish between fixed and loose implants.
