International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Neural illumination calibration for surgical workflow-optimized spectral imaging.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-10-07 DOI: 10.1007/s11548-025-03525-8
Alexander Baumann, Leonardo Ayala, Alexander Studier-Fischer, Jan Sellner, Berkin Özdemir, Karl-Friedrich Kowalewski, Slobodan Ilic, Silvia Seidlitz, Lena Maier-Hein
Purpose: Hyperspectral imaging (HSI) is emerging as a promising novel imaging modality with various potential surgical applications. Currently available cameras, however, suffer from poor integration into the clinical workflow because they require the lights to be switched off or the camera to be manually recalibrated as soon as lighting conditions change.

Methods: We propose a novel learning-based approach to recalibration of hyperspectral cameras during surgery that predicts the corresponding white reference image from an uncalibrated hyperspectral input, enabling spatially resolved, automatic, and sterile calibration under varying illumination conditions. Our key novelty lies in (i) the disentanglement of the space of possible illuminations from the space of possible tissue configurations and (ii) combining real-world white reference measurements with physics-inspired simulated illuminations to create a diverse and representative training set.

Results: Based on a total of 1,890 HSI cubes from a phantom, porcine subjects, rats, and humans, we derive the following key insights: Firstly, dynamically changing lighting conditions in the operating room dramatically reduce the performance of methods for physiological parameter estimation and surgical scene segmentation. Secondly, our method is not only sufficiently accurate to replace the tedious process of white reference-based recalibration, but also outperforms previously proposed methods by a large margin. Finally, our approach generalizes across species, lighting conditions, and image processing tasks.

Conclusion: Our method enables seamless integration of hyperspectral imaging into surgical workflows by providing rapid and automated illumination calibration. Its robust generalization across diverse conditions significantly enhances the reliability and practicality of spectral imaging in clinical settings, paving the way for broader adoption of HSI in surgery.
Citations: 0
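The conventional baseline that the paper's learned approach replaces is white-reference (flat-field) calibration against a white tile and a dark frame. A minimal sketch of that standard correction on a hyperspectral cube — array names and the toy values are illustrative, not from the paper:

```python
import numpy as np

def flat_field_correct(raw, white, dark, eps=1e-8):
    """Classic white-reference calibration of a hyperspectral cube.

    raw, white, dark: arrays of shape (H, W, bands) -- raw scene,
    white-tile reference, and dark (shutter-closed) reference.
    Returns reflectance clipped to [0, 1].
    """
    refl = (raw.astype(np.float64) - dark) / (white - dark + eps)
    return np.clip(refl, 0.0, 1.0)

# toy example: 2x2 cube with 3 bands, scene at half the white-tile signal
dark = np.zeros((2, 2, 3))
white = np.full((2, 2, 3), 200.0)
raw = np.full((2, 2, 3), 100.0)
print(flat_field_correct(raw, white, dark)[0, 0])  # ≈ [0.5, 0.5, 0.5]
```

The paper's point is that this correction must be redone whenever illumination changes; the proposed network instead predicts the white reference directly from the uncalibrated input.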
Tooth segmentation and dental crowding diagnosis using two-stage dual-dilated graph convolution.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-10-06 DOI: 10.1007/s11548-025-03526-7
Zongsong Han, Ning Dai, Zhilei Wu, Bin Yan, Luwei Liu, Bingting Shao
Purpose: Tooth segmentation and diagnosis of dental crowding severity on 3D intraoral scan models are key processes for computer-aided analysis of orthodontic models. Conventional methods are time-consuming, inefficient, and subjective, necessitating more efficient and intelligent approaches. Therefore, we propose a two-stage intelligent workflow.

Methods: In Stage 1, tooth segmentation is performed using an innovative dual-dilated graph convolutional network (DDGCNet1). In Stage 2, Stage 1's output is converted to a point cloud, then processed by DDGCNet2 and post-processing to generate the arch length discrepancy (ALD, an indicator of dental crowding). The encoding layers of the proposed networks embed a novel dual-dilated EdgeConv module, effectively learning from local features and long-range contextual information of adjacent teeth.

Results: Experimental comparative analysis demonstrates that the proposed network achieves outstanding segmentation performance and accurate dental crowding diagnosis. In ALD measurement, it attains a mean absolute error (MAE) of 1.553 mm for the maxilla and 1.434 mm for the mandible.

Conclusion: This study can assist orthodontists in diagnosis and treatment, alleviate their workload, and expedite the development of reliable orthodontic treatment plans, thereby meeting the demands of computer-aided orthodontic diagnosis.
Citations: 0
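The arch length discrepancy reported above is conventionally defined as the available arch length minus the space required by the teeth (the sum of their mesiodistal widths), with negative values indicating crowding. A toy sketch of that definition — the paper derives both terms automatically from the segmented point cloud, whereas the numbers here are invented:

```python
def arch_length_discrepancy(arch_length_mm, tooth_widths_mm):
    """ALD = available arch length minus required space.

    Negative values indicate crowding; positive values indicate spacing.
    (Textbook definition for illustration only.)
    """
    return arch_length_mm - sum(tooth_widths_mm)

# toy example: 70 mm of arch for teeth needing 74 mm -> 4 mm of crowding
print(arch_length_discrepancy(70.0, [6.0] * 10 + [7.0] * 2))  # -4.0
```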
Fully automated segmentation of substantia nigra toward longitudinal analysis of Parkinson's disease.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-10-06 DOI: 10.1007/s11548-025-03451-9
Tao Hu, Hayato Itoh, Masahiro Oda, Shinji Saiki, Koji Kamagata, Kei-Ichi Ishikawa, Wataru Sako, Nobutaka Hattori, Shigeki Aoki, Kensaku Mori
Purpose: Fully automated segmentation of the substantia nigra (SN) is an essential task for the development of an explainable computer-aided diagnosis system for Parkinson's disease (PD). Since anatomical alterations of the SN are vital information in PD diagnosis, a precise segmentation model should have generalization ability against spatiotemporal changes. To satisfy these requirements, we propose a fully automated pipeline with several new techniques for volumetric images obtained by neuromelanin magnetic resonance imaging.

Methods: We develop a pipeline by integrating SN-prior probability estimation into the decision of the SN-containing region of interest. The estimated SN-prior probability is further fed into a new priority attention mechanism as a gating signal in our segmentation model. Furthermore, we introduce test-time dropout to improve the segmentation model's accuracy and generalization ability. To evaluate the model's generalization ability, we collected principal and external datasets with longitudinal scans of the same PD patients.

Results: Our segmentation model achieved average Dice scores of 0.845 and 0.851 for SN hyperintense regions in the principal and external datasets, respectively. These results demonstrate the best generalization ability in our comparative evaluations. By thresholding the number of voxels in the SN hyperintense regions, we also evaluated the segmentation results in automated PD identification. PD identification achieved areas under the receiver operating characteristic curve of 0.755 and 0.726 using our pipeline's output and the ground truth, respectively.

Conclusions: The proposed pipeline, which integrates SN-prior probability estimation, a priority attention mechanism, and test-time dropout into our segmentation model, achieved accurate SN segmentation with high generalization ability on our longitudinal data: the principal and external datasets. As demonstrated in the validation with automated PD identification, our pipeline has the potential to improve the performance of PD diagnosis via further large-scale longitudinal analysis.
Citations: 0
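Test-time dropout, as named in the abstract, keeps dropout active at inference and averages several stochastic forward passes, which smooths predictions and yields an uncertainty estimate. A minimal NumPy sketch of the idea, with a hypothetical `logits_fn` standing in for the actual segmentation network:

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_predict(logits_fn, x, n_samples=50, p=0.5):
    """Test-time dropout: average predictions over stochastic forward passes.

    logits_fn(x, mask) is a stand-in for a model forward pass that applies
    the given dropout mask; the sample mean is the smoothed prediction and
    the sample spread a crude uncertainty estimate.
    """
    preds = []
    for _ in range(n_samples):
        # inverted-dropout mask: zeros features with prob p, rescales survivors
        mask = (rng.random(x.shape) > p) / (1.0 - p)
        preds.append(logits_fn(x, mask))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)

# toy "model": output is the mean of the masked features
f = lambda x, m: (x * m).mean()
mean, std = mc_dropout_predict(f, np.ones(1000))
# inverted dropout preserves the expectation, so mean stays near 1.0
```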
Finite element simulation of guidewire navigation in venous transcatheter procedures.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-10-06 DOI: 10.1007/s11548-025-03522-x
Kenza Oussalah, Richard Moreau, Arnaud Lelevé, Fabrice Morestin, Benyebka Bou-Saïd
Purpose: This paper introduces a finite element method (FEM) to model the navigation of a surgical guidewire using a transcatheter (TC) approach in the venous tree. The core objective is to characterize guidewire/vessel-wall interactions, to predict the reaction forces of the guidewire at the operator's grip zones, and to correlate them with the model's kinematics.

Methods: The analyses are performed with a dynamic implicit FEM simulation in Abaqus® (SIMULIA™). The venous geometry, from the femoral vein to the right atrium entry, is reconstructed from segmented preoperative CT scan data. A commercial super-stiff guidewire is modeled using beam elements with realistic incremental stiffness. To simulate real-life surgical insertion, a velocity-driven boundary condition is applied at the distal end of the guidewire. Biomimetic material and interaction properties, along with external environmental influences and loads, enable high-fidelity computation.

Results: Deformations of the venous tree walls remain minimal, while displacements of the guidewire are large. The maximum predicted reaction forces range from 0.5 to 1.4 N, depending on the geometric and kinematic insertion conditions of the guidewire. This magnitude is consistent with values reported in the literature for minimally invasive surgeries. The results validate the applicability of the dynamic implicit FEM in predicting guidewire trajectory, interaction forces, and reaction forces relevant to haptic feedback generation.

Conclusion: This work lays the foundation for an image-based, mimetic FEM adapted for the simulation of guidewire navigation. The proposed model offers an enhanced understanding of the mechanical behaviour underlying endovascular navigation.
Citations: 0
Relevance of advanced imaging analysis units in radiology departments: a narrative review.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-10-04 DOI: 10.1007/s11548-025-03529-4
Teodoro Martín-Noguerol, Félix Paulano-Godino, Pilar López-Úbeda, Roy F Riascos, Antonio Luna
Purpose: Radiology departments (RDs) face an increasing volume of data, images, and information, leading to a higher workload for radiologists. The integration of artificial intelligence (AI) presents an opportunity to optimize workflows and reduce the burden on radiologists. This review explores the role of advanced imaging analysis units (AIAUs) in enhancing radiological processes and improving overall patient outcomes.

Methods: A literature review was conducted to assess the impact of AI-driven AIAUs on RD workflows. The study examines the collaboration between radiologists, technicians, and biomedical engineers in the extraction and processing of imaging data. Additionally, the integration of AI algorithms for task automation is analyzed.

Results: The implementation of AIAUs in RDs has the potential to enhance workflow efficiency by minimizing radiologists' workload and improving imaging analysis. These units facilitate collaborative work among radiologists, technicians, and engineers, fostering continuous communication, feedback, and training. AI algorithms incorporated into AIAUs support automation, streamlining pre- and postprocessing imaging tasks.

Conclusion: AIAUs represent a promising approach to optimizing RD workflows and improving patient outcomes. Their successful implementation requires a multidisciplinary approach, integrating AI technologies with the expertise of radiologists, technicians, and biomedical engineers. Continuous collaboration and education within these units will be essential to maximize the benefits of emerging digital technologies in radiology.
Citations: 0
Advancing modified barium swallow pre-sorting with deep learning: a new paradigm for the first-step analysis in X-ray swallowing studies.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-10-04 DOI: 10.1007/s11548-025-03505-y
Shitong Mao, Mohamed A Naser, Sheila Buoy, Kristy K Brock, Katherine A Hutcheson
Purpose: Modified barium swallow (MBS) exams are pivotal for assessing swallowing function and include diagnostic video segments imaged in various planes, such as anteroposterior (AP, or coronal plane) and lateral (or mid-sagittal plane), alongside non-diagnostic 'scout' image segments used for anatomic reference and image set-up that do not include bolus swallows. These variations in imaging files necessitate manual sorting and labeling, complicating the pre-analysis workflow.

Methods: Our study introduces a deep learning approach to automate the categorization of swallow videos in MBS exams, distinguishing between the different types of diagnostic videos and identifying non-diagnostic scout videos to streamline the MBS review workflow. Our algorithms were developed on a dataset that included 3,740 video segments with a total of 986,808 frames from 285 MBS exams in 216 patients (average age 60 ± 9).

Results: Our model achieved an accuracy of 99.68% at the frame level and 100% at the video level in differentiating AP from lateral planes. For distinguishing scout from bolus swallowing videos, the model reached an accuracy of 90.26% at the frame level and 93.86% at the video level. Incorporating a multi-task learning approach notably enhanced the video-level accuracy to 96.35% for scout/bolus video differentiation.

Conclusion: Our analysis highlighted the importance of leveraging inter-frame connectivity for improving model performance. These findings significantly boost MBS exam processing efficiency, minimizing manual sorting efforts and allowing raters to allocate greater focus to clinical interpretation and patient care.
Citations: 0
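The abstract reports both frame-level and video-level accuracies without stating how frames are aggregated; one plausible scheme (an assumption, not the authors' method) is a simple majority vote over per-frame predictions:

```python
from collections import Counter

def video_label(frame_preds):
    """Aggregate per-frame class predictions to a single video-level label
    by majority vote. frame_preds: list of per-frame label strings."""
    return Counter(frame_preds).most_common(1)[0][0]

# a few misclassified frames do not flip the video-level decision
print(video_label(["lateral", "lateral", "AP", "lateral"]))  # lateral
```

This illustrates why video-level accuracy can exceed frame-level accuracy: isolated frame errors are outvoted.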
What are you looking at? Modality contribution in multimodal medical deep learning.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-10-02 DOI: 10.1007/s11548-025-03523-w
Christian Gapp, Elias Tappeiner, Martin Welk, Karl Fritscher, Elke R Gizewski, Rainer Schubert
Purpose: High-dimensional, multimodal data can nowadays be analyzed by huge deep neural networks with little effort. Several fusion methods for bringing together different modalities have been developed. Given the prevalence of high-dimensional, multimodal patient data in medicine, the development of multimodal models marks a significant advancement. However, how these models process information from individual sources in detail is still underexplored.

Methods: To this end, we implemented an occlusion-based modality contribution method that is both model- and performance-agnostic. This method quantitatively measures the importance of each modality in the dataset for the model to fulfill its task. We applied our method to three different multimodal medical problems for experimental purposes.

Results: We found that some networks have modality preferences that tend toward unimodal collapse, while some datasets are imbalanced from the ground up. Moreover, we provide fine-grained quantitative and visual attribute importance for each modality.

Conclusion: Our metric offers valuable insights that can support the advancement of multimodal model development and dataset creation. By introducing this method, we contribute to the growing field of interpretability in deep learning for multimodal research. This approach helps to facilitate the integration of multimodal AI into clinical practice. Our code is publicly available at https://github.com/ChristianGappGit/MC_MMD.
Citations: 0
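The occlusion idea behind such a metric can be sketched as follows: replace one modality at a time with a neutral baseline and measure how much the model output changes. The dict-based `model` interface, the zero baseline, and the normalization below are illustrative assumptions, not the authors' implementation (see their repository for that):

```python
import numpy as np

def modality_contribution(model, sample, baseline=0.0):
    """Occlusion-based modality importance.

    model: callable taking a dict of named modality arrays (hypothetical API)
    and returning a scalar output. Each modality is occluded in turn with a
    constant baseline; larger output change means larger contribution.
    Returns contribution scores normalized to sum to 1.
    """
    full = model(sample)
    deltas = {}
    for name in sample:
        occluded = dict(sample)
        occluded[name] = np.full_like(sample[name], baseline)
        deltas[name] = float(np.abs(full - model(occluded)))
    total = sum(deltas.values()) or 1.0
    return {k: v / total for k, v in deltas.items()}

# toy fusion model weighting the image modality 3x the tabular one
toy = lambda s: 3.0 * s["image"].sum() + 1.0 * s["tabular"].sum()
scores = modality_contribution(toy, {"image": np.ones(4), "tabular": np.ones(4)})
print(scores)  # image ≈ 0.75, tabular ≈ 0.25
```

Because only forward passes are needed, the measure is model-agnostic, matching the abstract's claim.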
Deep learning-based segmentation of acute pulmonary embolism in cardiac CT images.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-25 DOI: 10.1007/s11548-025-03503-0
Ehsan Amini, Georg Hille, Janine Hürtgen, Alexey Surov, Sylvia Saalfeld
Purpose: Acute pulmonary embolism (APE) is a common pulmonary condition that, in severe cases, can progress to right ventricular hypertrophy and failure, making it a critical health concern surpassed in severity only by myocardial infarction and sudden death. CT pulmonary angiogram (CTPA) is a standard diagnostic tool for detecting APE. However, for treatment planning and prognosis of patient outcome, an accurate assessment of individual APEs is required.

Methods: Within this study, we compiled and prepared a dataset of 200 CTPA image volumes of patients with APE. We then adapted two state-of-the-art neural networks, the nnU-Net and the transformer-based VT-UNet, to provide fully automatic APE segmentations.

Results: The nnU-Net demonstrated robust performance, achieving an average Dice similarity coefficient (DSC) of 88.25 ± 10.19% and an average 95th percentile Hausdorff distance (HD95) of 10.57 ± 34.56 mm across the validation sets in a five-fold cross-validation framework. The VT-UNet achieved on-par accuracy, with an average DSC of 87.90 ± 10.94% and a mean HD95 of 10.77 ± 34.19 mm.

Conclusions: We applied two state-of-the-art networks for automatic APE segmentation to our compiled CTPA dataset and achieved superior experimental results compared to the current state of the art. In clinical routine, accurate APE segmentations can be used for enhanced patient prognosis and treatment planning.
Citations: 0
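The Dice similarity coefficient reported above is the standard overlap measure DSC = 2|P ∩ G| / (|P| + |G|) between predicted and ground-truth masks. A minimal sketch on binary masks (toy data, not from the study):

```python
import numpy as np

def dice_score(pred, gt, eps=1e-8):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

# toy masks: 3 overlapping voxels, 4 predicted, 4 true -> 2*3/(4+4) = 0.75
pred = np.array([1, 1, 1, 1, 0, 0])
gt = np.array([1, 1, 1, 0, 1, 0])
print(dice_score(pred, gt))  # ≈ 0.75
```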
Classification of esophageal cancer by using hyperspectral data.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-23 DOI: 10.1007/s11548-025-03514-x
Marianne Maktabi, Claudia Hain, Hannes Köhler, Benjamin Huber, René Thieme, Katrin Schierle, Boris Jansen-Winkeln, Ines Gockel
Purpose: Esophageal cancer is widespread worldwide, with the highest rate in Asia. Early diagnosis plays a key role in increasing the survival rate. Early cancer detection, as well as rapid evaluation of tumor extent before surgery and of resection margins during and after surgery, is important to improve patients' outcomes. Hyperspectral imaging (HSI), as a noninvasive and contactless novel intraoperative technique, has shown promising results in cancer detection in combination with artificial intelligence.

Methods: In this clinical study, we examined the extent to which physiological parameters, such as water or hemoglobin content, differ in the esophagus, stomach, and cancer tissue. For this purpose, hyperspectral intraluminal recordings of affected tissue specimens were carried out. In addition, a classification of the three intraluminal tissue types (esophageal mucosa, stomach mucosa, and cancerous tissue) was performed using two different convolutional neural networks.

Results: Our analysis clearly demonstrated differences in hemoglobin concentration and water content between healthy and cancerous tissues, as well as among different tumor stages. For classification, an averaged area under the curve score of 81 ± 3%, a sensitivity of 74 ± 8%, and a specificity of 89 ± 2% were achieved across all tissue types using a hybrid convolutional neural network.

Conclusion: HSI has relevant potential for supporting the detection of tumorous tissue in esophageal cancer. However, further analyses including more detailed histopathologic correlation as the "gold standard" are needed. Data augmentation and future multicenter studies have to be carried out. These steps may help to improve and sharpen our current findings, especially for esophageal cancerous tissue.
Citations: 0
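Sensitivity and specificity as reported above follow the usual confusion-matrix definitions: sensitivity = TP/(TP+FN), specificity = TN/(TN+FP). A toy computation with invented counts chosen only to mirror the reported rates:

```python
def sensitivity_specificity(tp, fp, tn, fn):
    """Sensitivity (true-positive rate) and specificity (true-negative rate)
    from raw confusion-matrix counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# illustrative counts only (not the study's data): 100 positive, 100 negative
sens, spec = sensitivity_specificity(tp=74, fp=11, tn=89, fn=26)
print(sens, spec)  # 0.74 0.89
```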
Liver mask-guided SAM-enhanced dual-decoder network for landmark segmentation in AR-guided surgery.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-23 DOI: 10.1007/s11548-025-03516-9
Xukun Zhang, Sharib Ali, Yanlan Kang, Jingyi Zhu, Minghao Han, Le Wang, Xiaoying Wang, Lihua Zhang
Purpose: In augmented reality (AR)-guided laparoscopic liver surgery, accurate segmentation of liver landmarks is crucial for precise 3D-2D registration. However, existing methods struggle with complex structures, limited data, and class imbalance. In this study, we propose a novel approach to improve landmark segmentation performance by leveraging liver mask prediction.

Methods: We propose a dual-decoder model enhanced by a pre-trained segment anything model (SAM) encoder, where one decoder segments the liver and the other focuses on liver landmarks. The SAM encoder provides robust features for liver mask prediction, improving generalizability. A liver-guided consistency constraint establishes fine-grained spatial consistency between liver regions and landmarks, enhancing segmentation accuracy through detailed spatial modeling.

Results: The proposed method achieved state-of-the-art performance in liver landmark segmentation on two public laparoscopic datasets. By addressing feature entanglement, the dual-decoder framework with SAM and consistency constraints significantly improved segmentation in complex surgical scenarios.

Conclusion: The SAM-enhanced dual-decoder network, incorporating liver-guided consistency constraints, offers a promising solution for 2D landmark segmentation in AR-guided laparoscopic surgery. By mutually reinforcing liver mask and landmark segmentation, the method achieves improved accuracy and robustness for intraoperative applications.
Citations: 0