Latest articles in the International Journal of Computer Assisted Radiology and Surgery

Graph neural networks in multi-stained pathological imaging: extended comparative analysis of Radiomic features.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-10-07 DOI: 10.1007/s11548-024-03277-x
Luis Carlos Rivera Monroy, Leonhard Rist, Christian Ostalecki, Andreas Bauer, Julio Vera, Katharina Breininger, Andreas Maier
{"title":"Graph neural networks in multi-stained pathological imaging: extended comparative analysis of Radiomic features.","authors":"Luis Carlos Rivera Monroy, Leonhard Rist, Christian Ostalecki, Andreas Bauer, Julio Vera, Katharina Breininger, Andreas Maier","doi":"10.1007/s11548-024-03277-x","DOIUrl":"10.1007/s11548-024-03277-x","url":null,"abstract":"<p><strong>Purpose: </strong>This study investigates the application of Radiomic features within graph neural networks (GNNs) for the classification of multiple-epitope-ligand cartography (MELC) pathology samples. It aims to enhance the diagnosis of often misdiagnosed skin diseases such as eczema, lymphoma, and melanoma. The novel contribution lies in integrating Radiomic features with GNNs and comparing their efficacy against traditional multi-stain profiles.</p><p><strong>Methods: </strong>We utilized GNNs to process multiple pathological slides as cell-level graphs, comparing their performance with XGBoost and Random Forest classifiers. The analysis included two feature types: multi-stain profiles and Radiomic features. Dimensionality reduction techniques such as UMAP and t-SNE were applied to optimize the feature space, and graph connectivity was based on spatial and feature closeness.</p><p><strong>Results: </strong>Integrating Radiomic features into spatially connected graphs significantly improved classification accuracy over traditional models. The application of UMAP further enhanced the performance of GNNs, particularly in classifying diseases with similar pathological features. The GNN model outperformed baseline methods, demonstrating its robustness in handling complex histopathological data.</p><p><strong>Conclusion: </strong>Radiomic features processed through GNNs show significant promise for multi-disease classification, improving diagnostic accuracy. This study's findings suggest that integrating advanced imaging analysis with graph-based modeling can lead to better diagnostic tools. Future research should expand these methods to a wider range of diseases to validate their generalizability and effectiveness.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"497-505"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929635/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142382320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
High-fidelity surgical simulator for the performance of craniofacial osteotomies.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-12-11 DOI: 10.1007/s11548-024-03297-7
Sreekanth Arikatla, Sadhana Ravikumar, Raymond White, Tung Nguyen, Beatriz Paniagua
{"title":"High-fidelity surgical simulator for the performance of craniofacial osteotomies.","authors":"Sreekanth Arikatla, Sadhana Ravikumar, Raymond White, Tung Nguyen, Beatriz Paniagua","doi":"10.1007/s11548-024-03297-7","DOIUrl":"10.1007/s11548-024-03297-7","url":null,"abstract":"<p><strong>Purpose: </strong>The oral and maxillofacial (OMF) surgical community is making an active effort to develop new approaches for surgical training in order to compensate for work-hour restrictions, mitigate differences between training standards, and improve the efficiency of learning while minimizing the risks for the patients. Simulation-based learning, a technology adopted in other training paradigms, has the potential to enhance surgeons' knowledge and psychomotor skills.</p><p><strong>Methods: </strong>We developed a fully immersive, high-fidelity virtual simulation trainer system based on Kitware's open-source visualization and interactive simulation libraries: the Interactive Medical Simulation Toolkit (iMSTK) and the Visualization Toolkit (VTK). This system allows surgeons to train for the crucial osteotomy step in bilateral sagittal split osteotomy (BSSO) using a pen-grasp oscillating saw that is controlled in the virtual environment using a 3D Systems Geomagic Touch haptic device. The simulator incorporates a proficiency-based progression evaluation system to assess the correctness of the cut and provide user feedback.</p><p><strong>Results: </strong>Three expert clinicians and two senior residents tested our pilot simulator to evaluate how the developed system compares to the performance of real-life surgery. The outcomes of the face and content validation study showed promising results with respect to the quality of the simulated images and the force feedback response they obtained from the device matched what they expected to feel.</p><p><strong>Conclusion: </strong>The developed trainer has the potential to contribute to a reduction in the prevalence of adverse surgical outcomes after OMF surgeries involving osteotomies. Observing the clinicians and talking through some of the difficulties helped us identify key areas for improvement. Future work will focus on further clinical evaluation for the BSSO surgical scenario and extension of the trainer to include other craniofacial osteotomy procedures.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"535-543"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142815004","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-modal dataset creation for federated learning with DICOM-structured reports.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2025-02-03 DOI: 10.1007/s11548-025-03327-y
Malte Tölle, Lukas Burger, Halvar Kelm, Florian André, Peter Bannas, Gerhard Diller, Norbert Frey, Philipp Garthe, Stefan Groß, Anja Hennemuth, Lars Kaderali, Nina Krüger, Andreas Leha, Simon Martin, Alexander Meyer, Eike Nagel, Stefan Orwat, Clemens Scherer, Moritz Seiffert, Jan Moritz Seliger, Stefan Simm, Tim Friede, Tim Seidler, Sandy Engelhardt
{"title":"Multi-modal dataset creation for federated learning with DICOM-structured reports.","authors":"Malte Tölle, Lukas Burger, Halvar Kelm, Florian André, Peter Bannas, Gerhard Diller, Norbert Frey, Philipp Garthe, Stefan Groß, Anja Hennemuth, Lars Kaderali, Nina Krüger, Andreas Leha, Simon Martin, Alexander Meyer, Eike Nagel, Stefan Orwat, Clemens Scherer, Moritz Seiffert, Jan Moritz Seliger, Stefan Simm, Tim Friede, Tim Seidler, Sandy Engelhardt","doi":"10.1007/s11548-025-03327-y","DOIUrl":"10.1007/s11548-025-03327-y","url":null,"abstract":"<p><p>Purpose Federated training is often challenging on heterogeneous datasets due to divergent data storage options, inconsistent naming schemes, varied annotation procedures, and disparities in label quality. This is particularly evident in the emerging multi-modal learning paradigms, where dataset harmonization including a uniform data representation and filtering options are of paramount importance.Methods DICOM-structured reports enable the standardized linkage of arbitrary information beyond the imaging domain and can be used within Python deep learning pipelines with highdicom. Building on this, we developed an open platform for data integration with interactive filtering capabilities, thereby simplifying the process of creation of patient cohorts over several sites with consistent multi-modal data.Results In this study, we extend our prior work by showing its applicability to more and divergent data types, as well as streamlining datasets for federated training within an established consortium of eight university hospitals in Germany. We prove its concurrent filtering ability by creating harmonized multi-modal datasets across all locations for predicting the outcome after minimally invasive heart valve replacement. The data include imaging and waveform data (i.e., computed tomography images, electrocardiography scans) as well as annotations (i.e., calcification segmentations, and pointsets), and metadata (i.e., prostheses and pacemaker dependency).Conclusion Structured reports bridge the traditional gap between imaging systems and information systems. Utilizing the inherent DICOM reference system arbitrary data types can be queried concurrently to create meaningful cohorts for multi-centric data analysis. The graphical interface as well as example structured report templates are available at https://github.com/Cardio-AI/fl-multi-modal-dataset-creation .</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"485-495"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929732/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143081992","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated assessment of non-technical skills by heart-rate data.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-11-04 DOI: 10.1007/s11548-024-03287-9
Arnaud Huaulmé, Alexandre Tronchot, Hervé Thomazeau, Pierre Jannin
{"title":"Automated assessment of non-technical skills by heart-rate data.","authors":"Arnaud Huaulmé, Alexandre Tronchot, Hervé Thomazeau, Pierre Jannin","doi":"10.1007/s11548-024-03287-9","DOIUrl":"10.1007/s11548-024-03287-9","url":null,"abstract":"<p><strong>Purpose: </strong>Observer-based scoring systems, or automatic methods, based on features or kinematic data analysis, are used to perform surgical skill assessments. These methods have several limitations, observer-based ones are subjective, and the automatic ones mainly focus on technical skills or use data strongly related to technical skills to assess non-technical skills. In this study, we are exploring the use of heart-rate data, a non-technical-related data, to predict values of an observer-based scoring system thanks to random forest regressors.</p><p><strong>Methods: </strong>Heart-rate data from 35 junior resident orthopedic surgeons were collected during the evaluation of a meniscectomy performed on a bench-top simulator. Each participant has been evaluated by two assessors using the Arthroscopic Surgical Skill Evaluation Tool (ASSET) score. A preprocessing stage on heart-rate data, composed of threshold filtering and a detrending method, was considered before extracting 41 features. Then a random forest regressor has been optimized thanks to a randomized search cross-validation strategy to predict each score component.</p><p><strong>Results: </strong>The prediction of the partially non-technical-related components presents promising results, with the best result obtained for the safety component with a mean absolute error of 0.24, which represents a mean absolute percentage error of 5.76%. The analysis of feature important allowed us to determine which features are the more related to each ASSET component, and therefore determine the underlying impact of the sympathetic and parasympathetic nervous systems.</p><p><strong>Conclusion: </strong>In this preliminary work, a random forest regressor train on feature extract from heart-rate data could be used for automatic skill assessment and more especially for the partially non-technical-related components. Combined with more traditional data, such as kinematic data, it could help to perform accurate automatic skill assessment.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"561-568"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142570327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fronto-orbital advancement with patient-specific 3D-printed implants and robot-guided laser osteotomy: an in vitro accuracy assessment.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-12-13 DOI: 10.1007/s11548-024-03298-6
Michaela Maintz, Nora Desan, Neha Sharma, Jörg Beinemann, Michel Beyer, Daniel Seiler, Philipp Honigmann, Jehuda Soleman, Raphael Guzman, Philippe C Cattin, Florian M Thieringer
{"title":"Fronto-orbital advancement with patient-specific 3D-printed implants and robot-guided laser osteotomy: an in vitro accuracy assessment.","authors":"Michaela Maintz, Nora Desan, Neha Sharma, Jörg Beinemann, Michel Beyer, Daniel Seiler, Philipp Honigmann, Jehuda Soleman, Raphael Guzman, Philippe C Cattin, Florian M Thieringer","doi":"10.1007/s11548-024-03298-6","DOIUrl":"10.1007/s11548-024-03298-6","url":null,"abstract":"<p><strong>Purpose: </strong>The use of computer-assisted virtual surgical planning (VSP) for craniosynostosis surgery is gaining increasing implementation in the clinics. However, accurately transferring the preoperative planning data to the operating room remains challenging. We introduced and investigated a fully digital workflow to perform fronto-orbital advancement (FOA) surgery using 3D-printed patient-specific implants (PSIs) and cold-ablation robot-guided laser osteotomy. This novel approach eliminates the need for traditional surgical templates while enhancing precision and customization, offering a more streamlined and efficient surgical process.</p><p><strong>Methods: </strong>Computed tomography data of a patient with craniosynostosis were used to digitally reconstruct the skull and to perform VSP of the FOA. In total, six PSIs per skull were 3D-printed with a medical-grade bioresorbable composite using the Arburg Plastic Freeforming technology. The planned osteotomy paths and the screw holes, including their positions and axis angles, were digitally transferred to the cold-ablation robot-guided osteotome interface. The osteotomies were performed on 3D-printed patient skull models. The implants, osteotomy and final FOA results were scanned and compared to the VSP data.</p><p><strong>Results: </strong>The osteotomy deviations for the skulls indicated an overall maximum distance of 1.7 mm, a median deviation of 0.44 mm, and a maximum root mean square (RMS) error of 0.67 mm. The deviation of the point-to-point surface comparison of the FOA with the VSP data resulted in a median accuracy of 1.27 mm. Accessing the orbital cavity with the laser remained challenging.</p><p><strong>Conclusion: </strong>This in vitro study showcases a novel FOA technique by effectively combining robot-guided laser osteotomy with 3D-printed patient-specific implants, eliminating the need for surgical templates and achieving high accuracy in bone cutting and positioning. The workflow holds promise for reducing preoperative planning time and increasing surgical efficiency. Further studies on bone tissue are required to validate the safety and effectiveness of this approach, especially in addressing the challenges of pediatric craniofacial surgery.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"513-524"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929943/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142820060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Semi-automatic robotic puncture system based on deformable soft tissue point cloud registration.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-10-26 DOI: 10.1007/s11548-024-03247-3
Bo Zhang, Kui Chen, Yuhang Yao, Bo Wu, Qiang Li, Zheming Zhang, Peihua Fan, Wei Wang, Manxia Lin, Masakatsu G Fujie
{"title":"Semi-automatic robotic puncture system based on deformable soft tissue point cloud registration.","authors":"Bo Zhang, Kui Chen, Yuhang Yao, Bo Wu, Qiang Li, Zheming Zhang, Peihua Fan, Wei Wang, Manxia Lin, Masakatsu G Fujie","doi":"10.1007/s11548-024-03247-3","DOIUrl":"10.1007/s11548-024-03247-3","url":null,"abstract":"<p><strong>Purpose: </strong>Traditional surgical puncture robot systems based on computed tomography (CT) and infrared camera guidance have natural disadvantages for puncture of deformable soft tissues such as the liver. Liver movement and deformation caused by breathing are difficult to accurately assess and compensate by current technical solutions. We propose a semi-automatic robotic puncture system based on real-time ultrasound images to solve this problem.</p><p><strong>Method: </strong>Real-time ultrasound images and their spatial position information can be obtained by robot in this system. By recognizing target tissue in these ultrasound images and using reconstruction algorithm, 3D real-time ultrasound tissue point cloud can be constructed. Point cloud of the target tissue in the CT image can be obtained by using developed software. Through the point cloud registration method based on feature points, two point clouds above are registered. The puncture target will be automatically positioned, then robot quickly carries the puncture guide mechanism to the puncture site and guides the puncture. It takes about just tens of seconds from the start of image acquisition to completion of needle insertion. Patient can be controlled by a ventilator to temporarily stop breathing, and patient's breathing state does not need to be the same as taking CT scan.</p><p><strong>Results: </strong>The average operation time of 24 phantom experiments is 64.5 s, and the average error between the needle tip and the target point after puncture is 0.8 mm. Two animal puncture surgeries were performed, and the results indicated that the puncture errors of these two experiments are 1.76 mm and 1.81 mm, respectively.</p><p><strong>Conclusion: </strong>Robot system can effectively carry out and implement liver tissue puncture surgery, and the success rate of phantom experiments and experiments is 100%. It also shows that the puncture robot system has high puncture accuracy, short operation time, and great clinical value.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"525-534"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929701/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142512650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Stereo reconstruction from microscopic images for computer-assisted ophthalmic surgery.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-06-04 DOI: 10.1007/s11548-024-03177-0
Rebekka Peter, Sofia Moreira, Eleonora Tagliabue, Matthias Hillenbrand, Rita G Nunes, Franziska Mathis-Ullrich
{"title":"Stereo reconstruction from microscopic images for computer-assisted ophthalmic surgery.","authors":"Rebekka Peter, Sofia Moreira, Eleonora Tagliabue, Matthias Hillenbrand, Rita G Nunes, Franziska Mathis-Ullrich","doi":"10.1007/s11548-024-03177-0","DOIUrl":"10.1007/s11548-024-03177-0","url":null,"abstract":"<p><strong>Purpose: </strong>This work presents a novel platform for stereo reconstruction in anterior segment ophthalmic surgery to enable enhanced scene understanding, especially depth perception, for advanced computer-assisted eye surgery by effectively addressing the lack of texture and corneal distortions artifacts in the surgical scene.</p><p><strong>Methods: </strong>The proposed platform for stereo reconstruction uses a two-step approach: generating a sparse 3D point cloud from microscopic images, deriving a dense 3D representation by fitting surfaces onto the point cloud, and considering geometrical priors of the eye anatomy. We incorporate a pre-processing step to rectify distortion artifacts induced by the cornea's high refractive power, achieved by aligning a 3D phenotypical cornea geometry model to the images and computing a distortion map using ray tracing.</p><p><strong>Results: </strong>The accuracy of 3D reconstruction is evaluated on stereo microscopic images of ex vivo porcine eyes, rigid phantom eyes, and synthetic photo-realistic images. The results demonstrate the potential of the proposed platform to enhance scene understanding via an accurate 3D representation of the eye and enable the estimation of instrument to layer distances in porcine eyes with a mean average error of 190  <math><mrow><mi>μ</mi> <mtext>m</mtext></mrow> </math> , comparable to the scale of surgeons' hand tremor.</p><p><strong>Conclusion: </strong>This work marks a significant advancement in stereo reconstruction for ophthalmic surgery by addressing corneal distortions, a previously often overlooked aspect in such surgical scenarios. This could improve surgical outcomes by allowing for intra-operative computer assistance, e.g., in the form of virtual distance sensors.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"605-612"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929700/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141249094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Virtual airways heatmaps to optimize point of entry location in lung biopsy planning systems.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-11-29 DOI: 10.1007/s11548-024-03292-y
Debora Gil, Pere Lloret, Marta Diez-Ferrer, Carles Sanchez
{"title":"Virtual airways heatmaps to optimize point of entry location in lung biopsy planning systems.","authors":"Debora Gil, Pere Lloret, Marta Diez-Ferrer, Carles Sanchez","doi":"10.1007/s11548-024-03292-y","DOIUrl":"10.1007/s11548-024-03292-y","url":null,"abstract":"<p><strong>Purpose: </strong>We present a virtual model to optimize point of entry (POE) in lung biopsy planning systems. Our model allows to compute the quality of a biopsy sample taken from potential POE, taking into account the margin of error that arises from discrepancies between the orientation in the planning simulation and the actual orientation during the operation. Additionally, the study examines the impact of the characteristics of the lesion.</p><p><strong>Methods: </strong>The quality of the biopsy is given by a heatmap projected onto the skeleton of a patient-specific model of airways. The skeleton provides a 3D representation of airways structure, while the heatmap intensity represents the potential amount of tissue that it could be extracted from each POE. This amount of tissue is determined by the intersection of the lesion with a cone that represents the uncertainty area in the introduction of biopsy instruments. The cone, lesion, and skeleton are modelled as graphical objects that define a 3D scene of the intervention.</p><p><strong>Results: </strong>We have simulated different settings of the intervention scene from a single anatomy extracted from a CT scan and two lesions with regular and irregular shapes. The different scenarios are simulated by systematic rotation of each lesion placed at different distances from airways. Analysis of the heatmaps for the different settings shows a strong impact of lesion orientation for irregular shape and the distance for both shapes.</p><p><strong>Conclusion: </strong>The proposed heatmaps help to visually assess the optimal POE and identify whether multiple optimal POEs exist in different zones of the bronchi. They also allow us to model the maximum allowable error in navigation systems and study which variables have the greatest influence on the success of the operation. Additionally, they help determine at what point this influence could potentially jeopardize the operation.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"591-596"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142752022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive neighborhood triplet loss: enhanced segmentation of dermoscopy datasets by mining pixel information.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2024-08-02 DOI: 10.1007/s11548-024-03241-9
Mohan Xu, Lena Wiese
{"title":"Adaptive neighborhood triplet loss: enhanced segmentation of dermoscopy datasets by mining pixel information.","authors":"Mohan Xu, Lena Wiese","doi":"10.1007/s11548-024-03241-9","DOIUrl":"10.1007/s11548-024-03241-9","url":null,"abstract":"<p><strong>Purpose: </strong>The integration of deep learning in image segmentation technology markedly improves the automation capabilities of medical diagnostic systems, reducing the dependence on the clinical expertise of medical professionals. However, the accuracy of image segmentation is still impacted by various interference factors encountered during image acquisition.</p><p><strong>Methods: </strong>To address this challenge, this paper proposes a loss function designed to mine specific pixel information which dynamically changes during training process. Based on the triplet concept, this dynamic change is leveraged to drive the predicted boundaries of images closer to the real boundaries.</p><p><strong>Results: </strong>Extensive experiments on the PH2 and ISIC2017 dermoscopy datasets validate that our proposed loss function overcomes the limitations of traditional triplet loss methods in image segmentation applications. This loss function not only enhances Jaccard indices of neural networks by 2.42 <math><mo>%</mo></math> and 2.21 <math><mo>%</mo></math> for PH2 and ISIC2017, respectively, but also neural networks utilizing this loss function generally surpass those that do not in terms of segmentation performance.</p><p><strong>Conclusion: </strong>This work proposed a loss function that mined the information of specific pixels deeply without incurring additional training costs, significantly improving the automation of neural networks in image segmentation tasks. This loss function adapts to dermoscopic images of varying qualities and demonstrates higher effectiveness and robustness compared to other boundary loss functions, making it suitable for image segmentation tasks across various neural networks.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"453-463"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11929709/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141876608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
3D freehand ultrasound reconstruction by reference-based point cloud registration.
IF 2.3, CAS Tier 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-03-01 Epub Date: 2025-01-07 DOI: 10.1007/s11548-024-03280-2
Christoph Großbröhmer, Lasse Hansen, Jürgen Lichtenstein, Ludger Tüshaus, Mattias P Heinrich
{"title":"3d freehand ultrasound reconstruction by reference-based point cloud registration.","authors":"Christoph Großbröhmer, Lasse Hansen, Jürgen Lichtenstein, Ludger Tüshaus, Mattias P Heinrich","doi":"10.1007/s11548-024-03280-2","DOIUrl":"10.1007/s11548-024-03280-2","url":null,"abstract":"<p><strong>Purpose: </strong>This study aims to address the challenging estimation of trajectories from freehand ultrasound examinations by means of registration of automatically generated surface points. Current approaches to inter-sweep point cloud registration can be improved by incorporating heatmap predictions, but practical challenges such as label-sparsity or only partially overlapping coverage of target structures arise when applying realistic examination conditions.</p><p><strong>Methods: </strong>We propose a pipeline comprising three stages: (1) Utilizing a Free Point Transformer for coarse pre-registration, (2) Introducing HeatReg for further refinement using support point clouds, and (3) Employing instance optimization to enhance predicted displacements. Key techniques include expanding point sets with support points derived from prior knowledge and leverage of gradient keypoints. We evaluate our method on a large set of 42 forearm ultrasound sweeps with optical ground-truth tracking and investigate multiple ablations.</p><p><strong>Results: </strong>The proposed pipeline effectively registers free-hand intra-patient ultrasound sweeps. Combining Free Point Transformer with support-point enhanced HeatReg outperforms the FPT baseline by a mean directed surface distance of 0.96 mm (40%). Subsequent refinement using Adam instance optimization and DiVRoC further improves registration accuracy and trajectory estimation.</p><p><strong>Conclusion: </strong>The proposed techniques enable and improve the application of point cloud registration as a basis for freehand ultrasound reconstruction. Our results demonstrate significant theoretical and practical advantages of heatmap incorporation and multi-stage model predictions.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"475-484"},"PeriodicalIF":2.3,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142958530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0