Title: Reinforcement learning for safe autonomous two-device navigation of cerebral vessels in mechanical thrombectomy
Authors: Harry Robertshaw, Benjamin Jackson, Jiaheng Wang, Hadi Sadati, Lennart Karstensen, Alejandro Granados, Thomas C Booth
International Journal of Computer Assisted Radiology and Surgery, published 2025-04-03. DOI: 10.1007/s11548-025-03339-8

Purpose: Autonomous systems in mechanical thrombectomy (MT) hold promise for reducing procedure times, minimizing radiation exposure, and enhancing patient safety. However, current reinforcement learning (RL) methods reach only the carotid arteries, are not generalizable to other patient vasculatures, and do not consider safety. We propose a safe dual-device RL algorithm that can navigate beyond the carotid arteries to cerebral vessels.

Methods: We used the Simulation Open Framework Architecture to represent the intricacies of cerebral vessels, and a modified Soft Actor-Critic RL algorithm to learn, for the first time, the navigation of micro-catheters and micro-guidewires. Patient safety metrics were incorporated into the reward function by integrating guidewire tip forces. Inverse RL was used with demonstrator data on 12 patient-specific vascular cases.

Results: Our simulation demonstrates successful autonomous navigation within unseen cerebral vessels, achieving a 96% success rate, a 7.0 s procedure time, and 0.24 N mean forces, well below the proposed 1.5 N vessel rupture threshold.

Conclusion: To the best of our knowledge, our proposed autonomous system for MT two-device navigation reaches cerebral vessels, considers safety, and is generalizable to unseen patient-specific cases for the first time. We envisage future work extending the validation to vasculatures of different complexity and to in vitro models. While our contributions pave the way toward deploying agents in clinical settings, safety and trustworthiness will be crucial elements to consider when proposing new methodology.
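The Methods above fold guidewire tip forces into the reward signal. As a minimal sketch of that idea — all names, weights, and the quadratic penalty shape are hypothetical, not the paper's actual reward — a force-aware shaped reward might look like:

```python
def safety_reward(dist_to_target, prev_dist, tip_force_n,
                  force_threshold_n=1.5, force_weight=0.5):
    """Illustrative shaped reward: progress toward the target minus a
    penalty that grows as the guidewire tip force approaches the
    vessel-rupture threshold (1.5 N per the abstract). The penalty
    shape and weight are assumptions for illustration only."""
    progress = prev_dist - dist_to_target  # positive when moving closer
    force_penalty = force_weight * (tip_force_n / force_threshold_n) ** 2
    return progress - force_penalty
```

With this shape, an episode step at the reported 0.24 N mean force is penalized only slightly, while a step near the 1.5 N rupture threshold loses half its progress reward.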
Title: Generation of surgical reports for lymph node dissection during laparoscopic gastric cancer surgery based on artificial intelligence
Authors: Yuhao Zhai, Zhen Chen, Xingjian Luo, Zhi Zheng, Haiqiao Zhang, Xi Wang, Xiaosheng Yan, Xiaoye Liu, Jie Yin, Jinqiao Wang, Jun Zhang
International Journal of Computer Assisted Radiology and Surgery, published 2025-04-01. DOI: 10.1007/s11548-025-03345-w

Purpose: This study aimed to develop an artificial intelligence (AI) model that generates surgical reports for laparoscopic lymph node dissection in the suprapancreatic region during gastric cancer surgery.

Methods: Patients who underwent laparoscopic radical resection for gastric cancer were included in this study, and their surgical videos were analyzed. The videos were recorded from the opening of the gastropancreatic fold as the starting point to the transection of the left gastric artery as the endpoint, with the video frame rate set to 1 frame per second. All surgical procedures were recorded following the principle of tool-tissue interaction, with annotations completed by an experienced surgeon and reviewed by a senior surgeon. The final annotated surgical videos were used as inputs to the AI model to generate the surgical reports.

Results: A total of 100 patients who underwent laparoscopic surgery for gastric cancer were included. A Surgical Concept Alignment Network was used as the report-generation model. The average number of frames per video was 728.71, with the grasping forceps being the most frequently used instrument. The AI model successfully generated surgical video reports, achieving a BLEU-4 score of 0.7377, a METEOR score of 0.4846, and a ROUGE-L score of 0.7953.

Conclusion: The AI model demonstrates its capability to produce surgical reports for laparoscopic lymph node dissection in the suprapancreatic region during gastric cancer surgery. This model serves as a valuable tool in clinical diagnosis, treatment, and training.
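The Results above are reported as BLEU-4, METEOR, and ROUGE-L scores. The core ingredient of BLEU-n is modified n-gram precision between a generated report and a reference; a minimal, dependency-free sketch of that ingredient (not the full BLEU with brevity penalty or the paper's evaluation code) is:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=4):
    """Illustrative modified n-gram precision, the core of BLEU-n.
    candidate and reference are tokenised sentences (lists of strings);
    each candidate n-gram's count is clipped by its count in the reference."""
    cand = [tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1)]
    if not cand:
        return 0.0
    ref = Counter(tuple(reference[i:i + n])
                  for i in range(len(reference) - n + 1))
    clipped = sum(min(c, ref[g]) for g, c in Counter(cand).items())
    return clipped / len(cand)
```

A perfect match yields 1.0; full BLEU-4 additionally combines precisions for n = 1..4 with a brevity penalty.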
Title: Training-free temporal object tracking in surgical videos
Authors: Subhadeep Koley, Abdolrahim Kadkhodamohammadi, Santiago Barbarisi, Danail Stoyanov, Imanol Luengo
International Journal of Computer Assisted Radiology and Surgery, published 2025-04-01. DOI: 10.1007/s11548-025-03349-6

Purpose: In this paper, we present a novel approach for online object tracking in laparoscopic cholecystectomy (LC) surgical videos, targeting localisation and tracking of critical anatomical structures and instruments. Our method addresses the challenges of costly pixel-level annotations and label inconsistencies inherent in existing datasets.

Methods: Leveraging the inherent object localisation capabilities of pre-trained text-to-image diffusion models, we extract representative features from surgical frames without any training or fine-tuning. Our tracking framework uses these features, along with cross-frame interactions via an affinity matrix inspired by query-key-value attention, to ensure temporal continuity in the tracking process.

Results: Through a pilot study, we first demonstrate that diffusion features exhibit superior object localisation and consistent semantics across different decoder levels and temporal frames. We then perform extensive experiments to validate the effectiveness of our approach, showcasing its superiority over competitors for the task of temporal object tracking. Specifically, we achieve a per-pixel classification accuracy of 79.19%, a mean Jaccard score of 56.20%, and a mean F-score of 79.48% on the publicly available CholeSeg8K dataset.

Conclusion: Our work not only introduces a novel application of text-to-image diffusion models but also contributes to advancing the field of surgical video analysis, offering a promising avenue for accurate and cost-effective temporal object tracking in minimally invasive surgery videos.
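The Methods above propagate labels between frames through an affinity matrix in the query-key-value style. A minimal numpy sketch of that mechanism — the feature shapes, temperature, and label-distribution representation are assumptions, not the paper's implementation — is:

```python
import numpy as np

def propagate_labels(feat_prev, feat_curr, labels_prev, temperature=1.0):
    """Illustrative cross-frame affinity: current-frame pixels (queries)
    attend to previous-frame pixels (keys), and the previous frame's
    per-pixel label distribution (values) is carried forward.
    feat_prev, feat_curr: (N, d) features; labels_prev: (N, C)."""
    d = feat_prev.shape[1]
    logits = feat_curr @ feat_prev.T / (np.sqrt(d) * temperature)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    affinity = np.exp(logits)
    affinity /= affinity.sum(axis=1, keepdims=True)  # row-wise softmax
    return affinity @ labels_prev                    # (N_curr, C)
```

When features are temporally consistent, each row of the affinity matrix concentrates on the matching pixel, so labels transfer cleanly without any learned tracking head.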
Title: Generating 3D pseudo-healthy knee MR images to support trochleoplasty planning
Authors: Michael Wehrli, Alicia Durrer, Paul Friedrich, Volodimir Buchakchiyskiy, Marcus Mumme, Edwin Li, Gyozo Lehoczky, Carol C Hasler, Philippe C Cattin
International Journal of Computer Assisted Radiology and Surgery, published 2025-03-26. DOI: 10.1007/s11548-025-03343-y

Purpose: Trochlear dysplasia (TD) is a common malformation in adolescents, leading to anterior knee pain and instability. Surgical interventions such as trochleoplasty require precise planning to correct the trochlear groove. However, no standardized preoperative plan exists to guide surgeons in reshaping the femur. This study aims to generate patient-specific, pseudo-healthy MR images of the trochlear region that should theoretically align with the respective patient's patella, potentially supporting the preoperative planning of trochleoplasty.

Methods: We employ a wavelet diffusion model (WDM) to generate personalized, pseudo-healthy, anatomically plausible MR scans of the trochlear region. We train our model on knee MR scans of healthy subjects. During inference, we mask out pathological regions around the patella in scans of patients affected by TD and replace them with their pseudo-healthy counterpart. An orthopedic surgeon measured the sulcus angle (SA), trochlear groove depth (TGD), and Déjour classification in MR scans before and after inpainting. The code is available at https://github.com/wehrlimi/Generate-Pseudo-Healthy-Knee-MRI .

Results: The inpainting by our model significantly improves the SA, TGD, and Déjour classification in a study with 49 knee MR scans.

Conclusion: This study demonstrates the potential of WDMs in providing surgeons with patient-specific guidance. By offering anatomically plausible MR scans, the method could potentially enhance the precision and preoperative planning of trochleoplasty and pave the way to more minimally invasive surgeries.
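The inference step described above — masking the pathological region and replacing it with generated content while keeping the rest of the scan untouched — reduces to a mask-gated composite. A trivial sketch (the function name is hypothetical; the actual WDM conditions generation on the masked scan rather than blending post hoc) is:

```python
import numpy as np

def inpaint_region(scan, generated, mask):
    """Illustrative inpainting composite: voxels inside the pathological
    mask come from the model's pseudo-healthy output; all other voxels
    are kept from the original scan. All three arrays share one shape."""
    if not (scan.shape == generated.shape == mask.shape):
        raise ValueError("scan, generated and mask must share one shape")
    return np.where(mask, generated, scan)
```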
Title: Depth-based registration of 3D preoperative models to intraoperative patient anatomy using the HoloLens 2
Authors: Enzo Kerkhof, Abdullah Thabit, Mohamed Benmahdjoub, Pierre Ambrosini, Tessa van Ginhoven, Eppo B Wolvius, Theo van Walsum
International Journal of Computer Assisted Radiology and Surgery, published 2025-03-14. DOI: 10.1007/s11548-025-03328-x

Purpose: In augmented reality (AR) surgical navigation, a registration step is required to align the preoperative data with the patient. This work investigates the use of the depth sensor of the HoloLens 2 for registration in surgical navigation.

Methods: An AR depth-based registration framework was developed. The framework aligns preoperative and intraoperative point clouds and overlays the preoperative model on the patient. For evaluation, three experiments were conducted. First, the accuracy of the HoloLens's depth sensor was evaluated for both Long-Throw (LT) and Articulated Hand Tracking (AHAT) modes. Second, the overall registration accuracy was assessed with different alignment approaches, evaluating the accuracy and success rate of each. Finally, a qualitative assessment of the framework was performed on various objects.

Results: The depth accuracy experiment showed mean overestimation errors of 5.7 mm for AHAT and 9.0 mm for LT. For the overall alignment, the mean translation errors of the different methods ranged from 12.5 to 17.0 mm, while rotation errors ranged from 0.9 to 1.1 degrees.

Conclusion: The results show that the depth sensor of the HoloLens 2 can be used for image-to-patient alignment with 1-2 cm accuracy within 4 s, indicating that, with further improvements in accuracy, this approach can offer a convenient alternative to time-consuming marker-based approaches. This work provides a generic marker-less registration framework using the depth sensor of the HoloLens 2, with extensive analysis of the sensor's reconstruction and registration accuracy, and supports advancing research on marker-less registration in surgical navigation.
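Aligning preoperative and intraoperative point clouds, as in the framework above, typically bottoms out in a closed-form least-squares rigid transform between corresponding points (the Kabsch/Umeyama step inside ICP-style pipelines). A self-contained sketch — illustrative of the general technique, not the paper's specific alignment approaches — is:

```python
import numpy as np

def rigid_align(source, target):
    """Illustrative least-squares rigid alignment (Kabsch) between
    corresponding 3D point sets. source, target: (N, 3) arrays with
    known correspondences. Returns R (3x3) and t (3,) minimising
    ||R @ s_i + t - t_i||^2 over all pairs."""
    src_c = source - source.mean(axis=0)
    tgt_c = target - target.mean(axis=0)
    H = src_c.T @ tgt_c                      # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = target.mean(axis=0) - R @ source.mean(axis=0)
    return R, t
```

In a full ICP loop, correspondences are re-estimated (e.g. nearest neighbours) and this closed-form step is repeated until convergence.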
Title: NICE polyp feature classification for colonoscopy screening
Authors: Thomas De Carvalho, Rawen Kader, Patrick Brandao, Laurence B Lovat, Peter Mountney, Danail Stoyanov
International Journal of Computer Assisted Radiology and Surgery, published 2025-03-13. DOI: 10.1007/s11548-025-03338-9

Purpose: Colorectal cancer is one of the most prevalent cancers worldwide, highlighting the critical need for early and accurate diagnosis to reduce patient risks. Inaccurate diagnoses not only compromise patient outcomes but also lead to increased costs and additional time burdens for clinicians. Enhancing diagnostic accuracy is essential, and this study focuses on improving the accuracy of polyp classification using the NICE classification, which evaluates three key features: colour, vessels, and surface pattern.

Methods: A multiclass classifier was developed and trained to independently classify each of the three features in the NICE classification. The approach prioritizes clinically relevant features rather than relying on handcrafted or obscure deep learning features, ensuring transparency and reliability for clinical use. The classifier was trained on internal datasets and tested on both internal and public datasets.

Results: The classifier successfully classified the three polyp features, achieving an accuracy of over 92% on internal datasets and exceeding 88% on a public dataset. The high classification accuracy demonstrates the system's effectiveness in identifying the key features of the NICE classification.

Conclusion: This study underscores the potential of using an independent classification approach for NICE features to enhance clinical decision-making in colorectal cancer diagnosis. The method shows promise in improving diagnostic accuracy, which could lead to better patient outcomes and more efficient clinical workflows.
{"title":"Robotic CBCT meets robotic ultrasound.","authors":"Feng Li, Yuan Bi, Dianye Huang, Zhongliang Jiang, Nassir Navab","doi":"10.1007/s11548-025-03336-x","DOIUrl":"https://doi.org/10.1007/s11548-025-03336-x","url":null,"abstract":"<p><strong>Purpose: </strong>The multi-modality imaging system offers optimal fused images for safe and precise interventions in modern clinical practices, such as computed tomography-ultrasound (CT-US) guidance for needle insertion. However, the limited dexterity and mobility of current imaging devices hinder their integration into standardized workflows and the advancement toward fully autonomous intervention systems. In this paper, we present a novel clinical setup where robotic cone beam computed tomography (CBCT) and robotic US are pre-calibrated and dynamically co-registered, enabling new clinical applications. This setup allows registration-free rigid registration, facilitating multi-modal guided procedures in the absence of tissue deformation.</p><p><strong>Methods: </strong>First, a one-time pre-calibration is performed between the systems. To ensure a safe insertion path by highlighting critical vasculature on the 3D CBCT, SAM2 segments vessels from B-mode images, using the Doppler signal as an autonomously generated prompt. Based on the registration, the Doppler image or segmented vessel masks are then mapped onto the CBCT, creating an optimally fused image with comprehensive detail. To validate the system, we used a specially designed phantom, featuring lesions covered by ribs and multiple vessels with simulated moving flow.</p><p><strong>Results: </strong>The mapping error between US and CBCT resulted in an average deviation of <math><mrow><mn>1.72</mn> <mo>±</mo> <mn>0.62</mn></mrow> </math> mm. A user study demonstrated the effectiveness of CBCT-US fusion for needle insertion guidance, showing significant improvements in time efficiency, accuracy, and success rate. 
Needle intervention performance improved by approximately 50% compared to the conventional US-guided workflow.</p><p><strong>Conclusion: </strong>We present the first robotic dual-modality imaging system designed to guide clinical applications. The results show significant performance improvements compared to traditional manual interventions.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143617729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
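Mapping Doppler images or vessel masks from the US frame into the CBCT frame, as described above, is an application of the one-time pre-calibration: a fixed homogeneous transform between the two coordinate systems. A minimal sketch (the function name and the 4x4-matrix convention are assumptions for illustration) is:

```python
import numpy as np

def us_to_cbct(points_us, T_us_to_cbct):
    """Illustrative frame mapping: apply a pre-calibrated 4x4 homogeneous
    transform to (N, 3) points in the US frame, yielding their
    coordinates in the CBCT frame for overlay on the CBCT volume."""
    pts_h = np.hstack([points_us, np.ones((points_us.shape[0], 1))])
    return (pts_h @ T_us_to_cbct.T)[:, :3]
```

Because the robots hold both devices in a shared world frame, this transform can be kept up to date dynamically, which is what makes the setup "registration-free" at intervention time.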
{"title":"Automatic diagnosis of abdominal pathologies in untrimmed ultrasound videos.","authors":"Güinther Saibro, Yvonne Keeza, Benoît Sauer, Jacques Marescaux, Michele Diana, Alexandre Hostettler, Toby Collins","doi":"10.1007/s11548-025-03334-z","DOIUrl":"https://doi.org/10.1007/s11548-025-03334-z","url":null,"abstract":"<p><strong>Purpose: </strong>Despite major advances in Computer Assisted Diagnosis (CAD), the need for carefully labeled training data remains an important clinical translation barrier. This work aims to overcome this barrier for ultrasound video-based CAD, using video-level classification labels combined with a novel training strategy to improve the generalization performance of state-of-the-art (SOTA) video classifiers.</p><p><strong>Methods: </strong>SOTA video classifiers were trained and evaluated on a novel ultrasound video dataset of liver and kidney pathologies, and they all struggled to generalize, especially for kidney pathologies. A new training strategy is presented, wherein a frame relevance assessor is trained to score the video frames in a video by diagnostic relevance. This is used to automatically generate diagnostically-relevant video clips (DR-Clips), which guide a video classifier during training and inference.</p><p><strong>Results: </strong>Using DR-Clips with a Video Swin Transformer, we achieved a 0.92 ROC-AUC for kidney pathology detection in videos, compared to 0.72 ROC-AUC with a Swin Transformer and standard video clips. For liver steatosis detection, due to the diffuse nature of the pathology, the Video Swin Transformer, and other video classifiers, performed similarly well, generally exceeding a 0.92 ROC-AUC.</p><p><strong>Conclusion: </strong>In theory, video classifiers, such as video transformers, should be able to solve ultrasound CAD tasks with video labels. However, in practice, video labels provide weaker supervision compared to image labels, resulting in worse generalization, as demonstrated. 
The additional frame guidance provided by DR-Clips enhances performance significantly. The results highlight current limits and opportunities to improve frame guidance.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
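Given per-frame relevance scores from the frame relevance assessor described above, one natural way to form a DR-Clip is to pick the contiguous window with the highest total score. This sliding-window sketch is an assumption about how clip selection might work, not the paper's exact procedure:

```python
def best_clip(scores, clip_len):
    """Illustrative DR-Clip selection: return the start index of the
    contiguous window of length clip_len whose summed diagnostic-relevance
    score is highest, using an O(N) running-sum sweep."""
    if clip_len > len(scores):
        raise ValueError("clip longer than video")
    window = sum(scores[:clip_len])
    best_start, best_sum = 0, window
    for i in range(1, len(scores) - clip_len + 1):
        window += scores[i + clip_len - 1] - scores[i - 1]
        if window > best_sum:
            best_start, best_sum = i, window
    return best_start
```

The selected clip, rather than a uniformly sampled one, is then fed to the video classifier, concentrating supervision on diagnostically relevant frames.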
Title: High-fidelity surgical simulator for the performance of craniofacial osteotomies
Authors: Sreekanth Arikatla, Sadhana Ravikumar, Raymond White, Tung Nguyen, Beatriz Paniagua
International Journal of Computer Assisted Radiology and Surgery, published 2025-03-01, pages 535-543. DOI: 10.1007/s11548-024-03297-7

Purpose: The oral and maxillofacial (OMF) surgical community is making an active effort to develop new approaches for surgical training in order to compensate for work-hour restrictions, mitigate differences between training standards, and improve the efficiency of learning while minimizing risks to patients. Simulation-based learning, a technology adopted in other training paradigms, has the potential to enhance surgeons' knowledge and psychomotor skills.

Methods: We developed a fully immersive, high-fidelity virtual simulation trainer system based on Kitware's open-source visualization and interactive simulation libraries: the Interactive Medical Simulation Toolkit (iMSTK) and the Visualization Toolkit (VTK). This system allows surgeons to train for the crucial osteotomy step in bilateral sagittal split osteotomy (BSSO) using a pen-grasp oscillating saw that is controlled in the virtual environment via a 3D Systems Geomagic Touch haptic device. The simulator incorporates a proficiency-based progression evaluation system to assess the correctness of the cut and provide user feedback.

Results: Three expert clinicians and two senior residents tested our pilot simulator to evaluate how closely the developed system matches real-life surgery. The face and content validation study showed promising results: participants rated the quality of the simulated images highly, and the force feedback from the haptic device matched what they expected to feel.

Conclusion: The developed trainer has the potential to contribute to a reduction in the prevalence of adverse surgical outcomes after OMF surgeries involving osteotomies. Observing the clinicians and talking through some of the difficulties helped us identify key areas for improvement. Future work will focus on further clinical evaluation of the BSSO surgical scenario and extension of the trainer to other craniofacial osteotomy procedures.
Title: Graph neural networks in multi-stained pathological imaging: extended comparative analysis of Radiomic features
Authors: Luis Carlos Rivera Monroy, Leonhard Rist, Christian Ostalecki, Andreas Bauer, Julio Vera, Katharina Breininger, Andreas Maier
International Journal of Computer Assisted Radiology and Surgery, published 2025-03-01, pages 497-505. DOI: 10.1007/s11548-024-03277-x. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929635/pdf/

Purpose: This study investigates the application of Radiomic features within graph neural networks (GNNs) for the classification of multiple-epitope-ligand cartography (MELC) pathology samples. It aims to enhance the diagnosis of often misdiagnosed skin diseases such as eczema, lymphoma, and melanoma. The novel contribution lies in integrating Radiomic features with GNNs and comparing their efficacy against traditional multi-stain profiles.

Methods: We utilized GNNs to process multiple pathological slides as cell-level graphs, comparing their performance with XGBoost and Random Forest classifiers. The analysis included two feature types: multi-stain profiles and Radiomic features. Dimensionality reduction techniques such as UMAP and t-SNE were applied to optimize the feature space, and graph connectivity was based on spatial and feature closeness.

Results: Integrating Radiomic features into spatially connected graphs significantly improved classification accuracy over traditional models. The application of UMAP further enhanced the performance of GNNs, particularly in classifying diseases with similar pathological features. The GNN model outperformed baseline methods, demonstrating its robustness in handling complex histopathological data.

Conclusion: Radiomic features processed through GNNs show significant promise for multi-disease classification, improving diagnostic accuracy. This study's findings suggest that integrating advanced imaging analysis with graph-based modeling can lead to better diagnostic tools. Future research should expand these methods to a wider range of diseases to validate their generalizability and effectiveness.
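The cell-level graphs described above connect nodes by spatial (and feature) closeness. A minimal sketch of the spatial part — connecting each cell to its k nearest neighbours by centroid distance to produce a GNN edge list — is shown below; the function name and k value are illustrative assumptions:

```python
import numpy as np

def knn_edges(coords, k=3):
    """Illustrative cell-graph construction: connect each cell (node)
    to its k nearest spatial neighbours, returning a directed edge list
    suitable for a GNN. coords: (N, 2) cell centroids."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))   # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)        # exclude self-loops
    nbrs = np.argsort(dist, axis=1)[:, :k]
    return [(i, int(j)) for i in range(len(coords)) for j in nbrs[i]]
```

Feature closeness could be handled analogously by computing distances in Radiomic feature space instead of (or in addition to) centroid space.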