Evaluation of augmented reality guidance for glenoid pin placement in total shoulder arthroplasty.
Taylor Frantz, Frederick van Gestel, Pieter Slagmolen, Johnny Duerinck, Thierry Scheerlinck, Jef Vandemeulebroucke
DOI: 10.1007/s11548-025-03444-8 | International Journal of Computer Assisted Radiology and Surgery, pp. 1633-1642, published 2025-08-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350535/pdf/

Purpose: Computer-aided navigation and patient-specific 3D printed guides have demonstrated superior outcomes in total shoulder arthroplasty (TSA). Nevertheless, few TSAs are inserted using these technologies. Head-worn augmented reality (AR) devices can provide intuitive 3D computer navigation to the surgeon. This study investigates AR navigation in conjunction with adaptive spatial drift correction for TSA.

Methods: A phantom study was performed to assess the performance of AR-navigated pin placement in TSA. Two medical experts performed a total of 12 pin placements into phantom scapulae; six were placed using an end-to-end AR-navigated technique and six using a common freehand technique. Inside-out infrared (IR) tracking was designed and integrated into the AR headset to correct for device drift and provide tool tracking. Additionally, the impact of IR tool tracking, registration, and superposed/juxtaposed visualization techniques was investigated.

Results: AR-navigated pin placement resulted in a mean entry point error of 1.06 mm ± 0.64 mm and a directional error of 1.66° ± 0.65°. Compared with the freehand technique, AR navigation resulted in improved directional outcomes (p = 0.03), while entry point accuracy was not significantly different (p = 0.44). IR tool tracking error was 1.47 mm ± 0.69 mm and 0.92° ± 0.50°, and registration error was 4.32 mm ± 1.75 mm and 2.56° ± 0.82°. No statistical difference between AR visualization techniques was found in entry point (p = 0.22) or directional (p = 0.31) errors.

Conclusion: AR navigation allowed for pin placement outcomes comparable with those reported in the literature for patient-specific 3D printed guides; moreover, it complements patient-specific planning without the need for the guides themselves.
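
As an aside on the metrics above, a minimal sketch of how entry-point and directional errors can be computed from a planned and an executed pin trajectory, assuming both are expressed in a common (e.g. CT-based) coordinate frame; all names and values are illustrative and not taken from the study:

```python
import numpy as np

def pin_placement_errors(planned_entry, planned_dir, actual_entry, actual_dir):
    """Entry-point error (mm) and directional error (degrees) between a
    planned and an executed pin trajectory, both given in the same
    coordinate frame. Names and inputs are illustrative."""
    entry_error = np.linalg.norm(np.asarray(actual_entry, float) - np.asarray(planned_entry, float))
    u = np.asarray(planned_dir, dtype=float)
    v = np.asarray(actual_dir, dtype=float)
    u /= np.linalg.norm(u)
    v /= np.linalg.norm(v)
    # Clip guards against rounding just outside [-1, 1] before arccos.
    angle = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
    return entry_error, angle

# Example: ~1 mm entry offset and a small angular deviation.
print(pin_placement_errors([0, 0, 0], [0, 0, 1], [1.0, 0.2, 0.0], [0.02, 0.01, 1.0]))
```
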
Surgical tooltip localization via concentric nested square markers and depth-RGB multi-coordinate fusion.
Dexun Zhang, Tianqiao Zhang, Ahmed Elazab, Cong Li, Fucang Jia, Huoling Luo
DOI: 10.1007/s11548-025-03456-4 | International Journal of Computer Assisted Radiology and Surgery, pp. 1601-1611, published 2025-08-01

Purpose: Accurate tooltip localization is critical in surgical navigation and other high-precision applications. Traditional ArUco marker-based systems often suffer from detection instability and reduced accuracy under varying tool poses. This study proposes a novel real-time localization method based on concentric nested square markers and multi-coordinate frame fusion, improving robustness and spatial accuracy through geometric marker enhancement and data fusion.

Methods: The proposed marker integrates an embedded ArUco code within an outer nested square structure, enabling pose estimation from multiple coordinate frames. Tooltip localization is derived by fusing the estimated poses of both structures, with optional depth data incorporated to further enhance precision. A cubic calibration object with ArUco references was used to establish ground-truth positions. Extensive experiments were conducted under varied distances, tool inclination angles, and lighting conditions.

Results: The nested square marker achieved up to a 40.1% improvement in average localization accuracy compared to standard ArUco markers. Depth fusion further reduced the average error to 1.55 mm and decreased the standard deviation, indicating stronger stability. Additional comparisons with AprilTag, QR Code, and calibration patterns validated the method's superior performance across diverse marker types.

Conclusion: The proposed method offers a robust, compact, and accurate localization solution compatible with common camera systems. Its enhanced performance across various tool poses makes it well suited for real-world surgical scenarios. Source code is available at: https://github.com/xunlizhinian1124/Real-Time-Tool-Track
Experimental evaluation of virtual needle insertion framework with enhanced haptic feedback.
Mostafa Selim, Lars Eisenburger, Tom Dijkhuis, Martijn Van Dam, Alexander Broersen, Douwe Dresscher, Jouke Dijkstra, Momen Abayazid
DOI: 10.1007/s11548-025-03420-2 | International Journal of Computer Assisted Radiology and Surgery, pp. 1643-1652, published 2025-08-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350536/pdf/

Purpose: Haptic feedback could improve the efficiency of needle insertion procedures by providing surgeons with enhanced sensing and guiding capabilities. A framework has been developed to provide physicians with enhanced haptic feedback during CT-guided needle insertion procedures in oncology.

Methods: The physicians encountered needle-tissue interaction and guidance forces with visual feedback to accurately reach the tumor. The force feedback to users was enhanced by amplifying several parameters in the feedback model, such as tip forces and radial forces. The study evaluated the effect of multiple haptic and visual feedback algorithms on user performance in efficiently inserting the needle. In this experimental pilot study, 12 participants, including three interventional radiologists, engaged in five experimental scenarios simulating a needle insertion.

Results: The results showed that enhanced force feedback for tumor perception reduced tumor targeting error and trajectory deviation compared to natural force feedback. This was also the case when tumor perception and haptic guidance were both enhanced. Additionally, real-time visual feedback and enhanced force feedback for guidance significantly reduced the time needed to finish the task. Participants still preferred real-time visual feedback over enhanced haptic feedback cues.

Conclusions: On average, small tumors (around 3 mm in diameter) can be successfully targeted with enhanced haptic feedback in the radial and axial directions. Additionally, critical regions, such as veins within the liver, can be avoided more effectively as users maintain the desired trajectory with greater accuracy.
Optimization of an artificial neural network for predicting stress in robot-assisted laparoscopic surgery based on EDA sensor data.
Daniel Caballero, Manuel J Pérez-Salazar, Juan A Sánchez-Margallo, Francisco M Sánchez-Margallo
DOI: 10.1007/s11548-025-03399-w | International Journal of Computer Assisted Radiology and Surgery, pp. 1665-1675, published 2025-08-01

Purpose: This study aims to optimize tunable hyperparameters of a multilayer perceptron (MLP) setup. The optimization procedure is aimed at more accurately predicting potential health risks to the surgeon during robot-assisted surgery (RAS).

Methods: Data related to physiological parameters (electrodermal activity (EDA), blood pressure, and body temperature) were collected during twenty RAS sessions completed by nine surgeons with different levels of experience. Once the dataset was generated, two preprocessing techniques (scaling and normalization) were applied. Each dataset was divided into two subsets: 80% of the data for training and cross-validation and 20% for testing. MLP was selected as the prediction technique. Three MLP hyperparameters were selected for optimization: number of epochs, learning rate, and momentum. A central composite design (CCD) was applied with a full factorial design and five center points, giving 31 combinations for each dataset. Once the models were generated on the training dataset, the optimized models were selected and then validated on the cross-validation and test datasets.

Results: The optimized models were generated with an optimal number of epochs of 500; the most frequently selected learning rate was 0.01 and the most frequently selected momentum was 0.05. These results showed significant improvement for EDA (R² = 0.9722), blood pressure (R² = 0.9977), and body temperature (R² = 0.9941).

Conclusions: The MLP hyperparameters have been successfully optimized, and the enhanced models were successfully validated on cross-validation and test datasets. This invites the optimization of other AI techniques that could improve results in clinical practice.
Enhancing generalization in zero-shot multi-label endoscopic instrument classification.
Raphaela Maerkl, Tobias Rueckert, David Rauber, Max Gutbrod, Danilo Weber Nunes, Christoph Palm
DOI: 10.1007/s11548-025-03439-5 | International Journal of Computer Assisted Radiology and Surgery, pp. 1577-1587, published 2025-08-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350423/pdf/

Purpose: Recognizing previously unseen classes with neural networks is a significant challenge due to their limited generalization capabilities. This issue is particularly critical in safety-critical domains such as medical applications, where accurate classification is essential for reliability and patient safety. Zero-shot learning methods address this challenge by utilizing additional semantic data, with their performance relying heavily on the quality of the generated embeddings.

Methods: This work investigates the use of full descriptive sentences, generated by a Sentence-BERT model, as class representations, compared to simpler category-based word embeddings derived from a BERT model. Additionally, the impact of z-score normalization as a post-processing step on these embeddings is explored. The proposed approach is evaluated on a multi-label generalized zero-shot learning task, focusing on the recognition of surgical instruments in endoscopic images from minimally invasive cholecystectomies.

Results: The results demonstrate that combining sentence embeddings and z-score normalization significantly improves model performance. For unseen classes, the AUROC improves from 43.9% to 64.9%, and the multi-label accuracy from 26.1% to 79.5%. Overall performance measured across both seen and unseen classes improves from 49.3% to 64.9% in AUROC and from 37.3% to 65.1% in multi-label accuracy, highlighting the effectiveness of the approach.

Conclusion: These findings demonstrate that sentence embeddings and z-score normalization can substantially enhance the generalization performance of zero-shot learning models. However, as the study is based on a single dataset, future work should validate the method across diverse datasets and application domains to establish its robustness and broader applicability.
{"title":"Homology-feature-assisted quantification of fibrotic lesions in computed tomography images: a proof of concept for CT image feature-based prediction for gene-expression-distribution.","authors":"Kentaro Doi, Hodaka Numasaki, Yusuke Anetai, Yayoi Natsume-Kitatani","doi":"10.1007/s11548-025-03428-8","DOIUrl":"10.1007/s11548-025-03428-8","url":null,"abstract":"<p><strong>Purpose: </strong>Computed tomography (CT) image is promising for diagnosing of interstitial idiopathic pneumonias (IIPs); however, quantification of IIPs lesions in CT images is required. This study aimed to quantitatively evaluate fibrotic lesions in CT images using homology-based image analysis.</p><p><strong>Methods: </strong>We collected publicly available CT images comprising 47 fibrotic images and 36 non-fibrotic images. The homology-profile (HP) image analysis method provides b0 and b1 profiles, indicating the number of isolated components and holes in a binary image. We locally applied the HP method to the CT image and generated homology-based feature (HF) maps as resultant images. The collected images were randomly divided into the tuning dataset and the testing dataset. The cut-off value for classifying the HF map for fibrotic or non-fibrotic images was defined using receiver operating characteristic (ROC) analysis with the tuning dataset. This cut-off value was evaluated using the testing dataset with accuracy, sensitivity, specificity, and precision.</p><p><strong>Results: </strong>We successfully visualized the quantification of fibrotic lesions in the HF map. The b0 HF map was more suitable for quantifying fibrotic lesions than b1. The mean cut-off value of the b0 HF map was 199, with all performances achieved at 1.0. Furthermore, the classification of the b0 HF map for fibrotic or lung cancer images achieved all maximum performances at 1.0.</p><p><strong>Conclusion: </strong>This study demonstrated the feasibility of using the HF in quantitatively evaluating fibrotic lesions in CT images. Our proposed HP-based method can also be promising in quantifying the fibrotic lesions of patients with IIPs, which can be applicable to assist the diagnosis of IIPs.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1703-1711"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350597/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144163421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A reproducible framework for synthetic data generation and instance segmentation in robotic suturing.
Pietro Leoncini, Francesco Marzola, Matteo Pescio, Maura Casadio, Alberto Arezzo, Giulio Dagnino
DOI: 10.1007/s11548-025-03460-8 | International Journal of Computer Assisted Radiology and Surgery, pp. 1567-1576, published 2025-08-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350469/pdf/

Purpose: Automating suturing in robotic-assisted surgery offers significant benefits, including enhanced precision, reduced operative time, and alleviated surgeon fatigue. Achieving this requires robust computer vision (CV) models, but their development is hindered by the scarcity of task-specific datasets and the complexity of acquiring and annotating real surgical data. This work addresses these challenges using a sim-to-real approach to create synthetic datasets and a data-driven methodology for model training and evaluation.

Methods: Existing 3D models of Da Vinci tools were modified and new models (needle and tissue cuts) were created to account for diverse data scenarios, enabling the generation of three synthetic datasets with increasing realism using Unity and the Perception package. These datasets were then employed to train several YOLOv8-m models for object detection, to evaluate the generalizability of synthetic-trained models in real scenarios and the impact of dataset realism on model performance. Additionally, a real-time instance segmentation model was developed through a hybrid training strategy combining synthetic images with a minimal set of real images.

Results: Synthetic-trained models showed improved performance on real test sets as training dataset realism increased, but realism levels remained insufficient for complete generalization. The hybrid approach, by contrast, significantly increased performance in real scenarios. The hybrid instance segmentation model exhibited real-time capabilities and robust accuracy, achieving the best Dice coefficient (0.92) with minimal dependence on real training data (30-50 images).

Conclusions: This study demonstrates the potential of sim-to-real synthetic datasets to advance robotic suturing automation through a simple and reproducible framework. By sharing 3D models, Unity environments, and annotated datasets, this work provides resources for creating additional images, expanding datasets, and enabling fine-tuning or semi-supervised learning. By facilitating further exploration, this work lays a foundation for advancing suturing automation and addressing task-specific dataset scarcity.
{"title":"Exploring interaction paradigms for segmenting medical images in virtual reality.","authors":"Zachary Jones, Simon Drouin, Marta Kersten-Oertel","doi":"10.1007/s11548-025-03424-y","DOIUrl":"10.1007/s11548-025-03424-y","url":null,"abstract":"<p><strong>Purpose: </strong>Virtual reality (VR) can offer immersive platforms for segmenting complex medical images to facilitate a better understanding of anatomical structures for training, diagnosis, surgical planning, and treatment evaluation. These applications rely on user interaction within the VR environment to manipulate and interpret medical data. However, the optimal interaction schemes and input devices for segmentation tasks in VR remain unclear. This study compares user performance and experience using two different input schemes.</p><p><strong>Methods: </strong>Twelve participants segmented 6 CT/MRI images using two input methods: keyboard and mouse (KBM) and motion controllers (MCs). Performance was assessed using accuracy, completion time, and efficiency. A post-task questionnaire measured users' perceived performance and experience.</p><p><strong>Results: </strong>No significant overall time difference was observed between the two input methods, though KBM was faster for larger segmentation tasks. Accuracy was consistent across input schemes. Participants rated both methods as equally challenging, with similar efficiency levels, but found MCs more enjoyable to use.</p><p><strong>Conclusion: </strong>These findings suggest that VR segmentation software should support flexible input options tailored to task complexity. Future work should explore enhancements to motion controller interfaces to improve usability and user experience.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1713-1721"},"PeriodicalIF":2.3,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144121368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating virtual reality as a tool for improving surgical planning in spinal tumors.
Tina Nomena Herimino Nantenaina, Andrey Titov, Sung-Joo Yuh, Simon Drouin
DOI: 10.1007/s11548-025-03440-y | International Journal of Computer Assisted Radiology and Surgery, pp. 1677-1687, published 2025-08-01

Purpose: Surgical planning is essential before surgery, especially for spinal tumor resection. During planning, medical images are analyzed by the surgeon on a standard computer display. However, this display mode is limited in terms of the spatial perception of anatomical structures. The purpose of this study is to assess the impact of an alternative display mode, virtual reality (VR), on the surgical planning of spinal tumor resection by comparing VR with conventional computer-based visualization.

Methods: A user study was conducted with eight neurosurgeons, who planned six spinal tumor surgeries using both VR and computer visualization modalities. The evaluation focused on the perception of anatomical-functional information from medical images, the identification of anatomical structures, and the selection of surgical approaches, represented by the number of anatomical structures traversed to reach the tumor. These parameters were assessed using objective questionnaires developed from a work domain analysis (WDA) previously validated in brain surgery, which we adapted to spinal surgery.

Results: VR made it easier to perceive a greater amount of anatomical-functional information compared to computer visualization. Surgeons identified a greater number of anatomical structures with VR than with computer visualization. Furthermore, surgeons selected additional anatomical structures to be traversed to reach the tumor when using VR, leading to a more precise selection of surgical approaches. These findings suggest the added value of VR in supporting surgical decision-making when planning surgery.

Conclusion: VR can be a promising tool for surgical planning by providing an immersive and interactive perspective that enhances understanding of anatomy. However, these findings come from an exploratory study; more clinical cases are needed to demonstrate its feasibility and reliability.
Automated vision-based assistance tools in bronchoscopy: stenosis severity estimation.
Clara Tomasini, Javier Rodriguez-Puigvert, Dinora Polanco, Manuel Viñuales, Luis Riazuelo, Ana C Murillo
DOI: 10.1007/s11548-025-03398-x | International Journal of Computer Assisted Radiology and Surgery, pp. 1733-1740, published 2025-08-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12350464/pdf/

Purpose: Subglottic stenosis refers to the narrowing of the subglottis, the airway between the vocal cords and the trachea. Its severity is typically evaluated by estimating the percentage of obstructed airway. This estimation can be obtained from CT data or through visual inspection by experts exploring the region. However, visual inspections are inherently subjective, leading to less consistent and robust diagnoses. No public methods or datasets are currently available for automated evaluation of this condition from bronchoscopy video.

Methods: We propose a pipeline for automated subglottic stenosis severity estimation during the bronchoscopy exploration, without requiring the physician to traverse the stenosed region. Our approach exploits the physical effect of illumination decline in endoscopy to segment and track the lumen and obtain a 3D model of the airway. This 3D model is obtained from a single frame and is used to measure the airway narrowing.

Results: Our pipeline is the first to enable automated and robust subglottic stenosis severity measurement using bronchoscopy images. The results show consistency with ground-truth estimations from CT scans and with expert estimations, as well as reliable repeatability across multiple estimations on the same patient. Our evaluation is performed on our new Subglottic Stenosis Dataset of real bronchoscopy procedure data.

Conclusion: We demonstrate how to automate evaluation of subglottic stenosis severity using only bronchoscopy. Our approach can assist with and shorten diagnosis and monitoring procedures, providing automated and repeatable estimations with less exploration time, and spares patients radiation exposure as no CT is required. Additionally, we release the first public benchmark for subglottic stenosis severity assessment.