International Journal of Computer Assisted Radiology and Surgery: Latest Articles

A multi-model deep learning approach for the identification of coronary artery calcifications within 2D coronary angiography images.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-05-08; DOI: 10.1007/s11548-025-03382-5
Edoardo De Rose, Ciro Benito Raggio, Ahmad Riccardo Rasheed, Pierangela Bruno, Paolo Zaffino, Salvatore De Rosa, Francesco Calimeri, Maria Francesca Spadea
Purpose: Identifying and quantifying coronary artery calcification (CAC) is crucial for preoperative planning, as it helps to estimate both the complexity of the 2D coronary angiography (2DCA) procedure and the risk of intraoperative complications. Despite its relevance, current practice relies on visual inspection of the 2DCA image frames by clinicians. This procedure is prone to inaccuracies because of the poor contrast and small size of the CAC, and it depends on the physician's experience. To address this issue, we developed a workflow to assist clinicians in identifying CAC within 2DCA, using data from 44 image acquisitions across 14 patients.
Methods: Our workflow consists of three stages. In the first stage, a classification backbone based on ResNet-18 guides CAC identification by extracting relevant features from 2DCA frames. In the second stage, a U-Net decoder architecture, mirroring the encoding structure of the ResNet-18, identifies the regions of interest (ROI) of the CAC. Finally, a post-processing step refines the results to obtain the final ROI. The workflow was evaluated using leave-out cross-validation.
Results: The proposed method outperformed the comparative methods, achieving an F1-score of 0.87 (0.77-0.94, median and quartiles) for the classification step; for the CAC identification step, the intersection over minimum (IoM) was 0.64 (0.46-0.86, median and quartiles).
Conclusion: This is the first attempt to propose a clinical decision support system to assist the identification of CAC within 2DCA. The proposed workflow holds the potential to improve both the accuracy and efficiency of CAC quantification, with promising clinical applications. As future work, the concurrent use of multiple auxiliary tasks could be explored to further improve segmentation performance.
Pages: 1273-1281. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167256/pdf/
Citations: 0
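The staged design (a ResNet-18 classification backbone whose encoder features also feed a mirrored U-Net-style decoder) can be sketched as follows. This is a minimal illustration assuming standard torchvision components and 3-channel inputs; the ResNetUNet name and all layer choices are assumptions, not the authors' exact configuration.

```python
# Minimal sketch: ResNet-18 encoder shared between a frame-level CAC
# classifier and a mirrored U-Net-style decoder for CAC regions of interest.
# All layer sizes here are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ResNetUNet(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, n_classes: int = 1):
        super().__init__()
        base = resnet18(weights=None)
        self.stem = nn.Sequential(base.conv1, base.bn1, base.relu)  # /2,  64 ch
        self.pool = base.maxpool                                    # /4
        self.enc1, self.enc2 = base.layer1, base.layer2  # /4 64 ch, /8 128 ch
        self.enc3, self.enc4 = base.layer3, base.layer4  # /16 256 ch, /32 512 ch
        # stage 1: frame-level classification head guiding CAC identification
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(512, 1))
        # stage 2: decoder mirroring the encoder, with skip connections
        self.up3 = self._up(512, 256)
        self.up2 = self._up(256 + 256, 128)
        self.up1 = self._up(128 + 128, 64)
        self.seg_head = nn.Conv2d(64 + 64, n_classes, kernel_size=1)

    @staticmethod
    def _up(cin, cout):
        return nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(cin, cout, kernel_size=3, padding=1),
            nn.ReLU(inplace=True))

    def forward(self, x):  # x: (B, 3, H, W) angiography frame
        x0 = self.stem(x)
        e1 = self.enc1(self.pool(x0))
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)
        e4 = self.enc4(e3)
        logits = self.cls_head(e4)                    # CAC present in frame?
        d3 = self.up3(e4)
        d2 = self.up2(torch.cat([d3, e3], dim=1))
        d1 = self.up1(torch.cat([d2, e2], dim=1))
        mask = self.seg_head(torch.cat([d1, e1], dim=1))  # ROI logits, 1/4 res
        return logits, mask

cls_logits, roi_logits = ResNetUNet()(torch.randn(1, 3, 224, 224))
```

A post-processing pass (the paper's third stage) would then threshold and refine the mask to obtain the final ROI.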
Automatic ultrasound image alignment for diagnosis of pediatric distal forearm fractures.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-05-02; DOI: 10.1007/s11548-025-03361-w
Peng Liu, Yujia Hu, Jurek Schultz, Jinjing Xu, Christoph von Schrottenberg, Philipp Schwerk, Josephine Pohl, Guido Fitze, Stefanie Speidel, Micha Pfeiffer
Purpose: This study aims to develop an automatic method for aligning ultrasound images of the distal forearm to diagnose pediatric fractures. The approach seeks to bypass the reliance on X-rays for fracture diagnosis, minimizing radiation exposure, making the process less painful, and creating a more child-friendly diagnostic pathway.
Methods: We present a fully automatic pipeline to align paired POCUS images. We first leverage a deep learning model to delineate bone boundaries, from which we obtain key anatomical landmarks. These landmarks then guide an optimization-based alignment process built on three constraints: aligning specific points, ensuring parallel orientation of the bone segments, and matching the bone widths.
Results: The method demonstrated high alignment accuracy compared to reference X-rays in terms of boundary distances. A morphology experiment covering fracture classification and angulation measurement showed comparable performance between the merged ultrasound images and conventional X-rays, supporting the effectiveness of our method in these cases.
Conclusions: The study introduced an effective, fully automatic pipeline for aligning ultrasound images, showing potential to replace X-rays for diagnosing pediatric distal forearm fractures. Initial tests show that surgeons find many of our results sufficient for diagnosis. Future work will focus on increasing the dataset size to improve diagnostic accuracy and reliability.
Pages: 1249-1254. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167337/pdf/
Citations: 0
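The three optimization constraints lend themselves to a simple composite cost over a 2D transform. The sketch below assumes a similarity transform (rotation, translation, plus a scale factor so the width term has an effect) and hand-picked weights; the paper's exact parameterization may differ.

```python
# Sketch of the optimization-based alignment stage: fit a 2D similarity
# transform so landmark points coincide, bone axes stay parallel, and bone
# widths match. The scale parameter and term weights are assumptions.
import numpy as np
from scipy.optimize import minimize

def similarity(params, pts):
    th, tx, ty, s = params
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return s * pts @ R.T + np.array([tx, ty])

def cost(params, lm_mov, lm_fix, dir_mov, dir_fix, w_mov, w_fix):
    th, _, _, s = params
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    point_term = np.sum((similarity(params, lm_mov) - lm_fix) ** 2)
    # parallelism: transformed bone axis should align with the fixed one
    parallel_term = 1.0 - abs(np.dot(R @ dir_mov, dir_fix))
    # width matching: scaled bone width should equal the fixed view's width
    width_term = (s * w_mov - w_fix) ** 2
    return point_term + 10.0 * parallel_term + width_term

# toy landmarks (mm); in the pipeline these come from the bone segmentation
lm_fix = np.array([[0.0, 0.0], [30.0, 2.0]])
lm_mov = np.array([[5.0, 4.0], [35.0, 7.0]])
d_mov = np.array([1.0, 0.1]) / np.linalg.norm([1.0, 0.1])
res = minimize(cost, x0=[0.0, 0.0, 0.0, 1.0],
               args=(lm_mov, lm_fix, d_mov, np.array([1.0, 0.0]), 6.2, 6.0),
               method="Nelder-Mead")
theta, tx, ty, scale = res.x
```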
Heat: high-efficiency simulation for thermal ablation therapy.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-04-10; DOI: 10.1007/s11548-025-03350-z
Jonas Mehtali, Juan Verde, Caroline Essert
Purpose: Percutaneous thermal ablation is increasingly popular but still suffers from complex preoperative planning, especially with multiple needles. Existing planning methods either use theoretical ablation shapes for faster estimates or are computationally intensive when incorporating realistic thermal propagation. This paper introduces a multi-resolution approach that accelerates thermal propagation simulation, letting users adjust ablation parameters and see the results in interactive time.
Methods: For static needle positions, a high-resolution simulation based on a GPU-accelerated implementation of the Pennes bioheat equation is used. During user interaction, intermediate frames display a lower-resolution estimation of the ablated volume. Two methods are compared, based on GPU-accelerated reimplementations of finite-difference and lattice Boltzmann approaches. A parameter study identified the optimal balance between speed and accuracy for the low- and high-resolution frames. The chosen parameters were finally tested in multi-needle scenarios to validate the interactive capability in this context.
Results: Tested with percutaneous radiofrequency data, our multi-resolution method significantly reduces computation time while maintaining good accuracy compared to the reference simulation. High-resolution frames reach up to 5.8 fps, while intermediate low-resolution frames reach 32 fps with less than 20% loss of accuracy.
Conclusion: This multi-resolution approach allows smooth interaction with multiple needles, with instant visualization of the predicted ablation volume, in the context of percutaneous radiofrequency treatments. It could also be applied to automated planning, reducing the time required for iterative adjustments.
Pages: 1135-1143.
Citations: 0
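The reference simulation rests on the Pennes bioheat equation, ρc ∂T/∂t = k∇²T + ρ_b c_b ω_b (T_a − T) + Q. A plain NumPy version of one explicit finite-difference step (a CPU analogue of the paper's GPU implementation) might look like the sketch below; tissue constants and the grid are generic placeholders, not the paper's calibrated values.

```python
# One explicit finite-difference step of the Pennes bioheat equation on a
# regular 3D grid. Constants are generic soft-tissue placeholders.
import numpy as np

RHO_C = 3.6e6    # tissue volumetric heat capacity rho*c [J/(m^3 K)]
K_T = 0.5        # thermal conductivity [W/(m K)]
W_B = 0.004      # blood perfusion rate [1/s]
RHO_C_B = 3.8e6  # blood volumetric heat capacity [J/(m^3 K)]
T_A = 37.0       # arterial blood temperature [deg C]

def pennes_step(T, Q, dx=1e-3, dt=0.05):
    """Advance the temperature field T by dt (boundaries clamped to T_A)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx ** 2
    T = T + dt * (K_T * lap + RHO_C_B * W_B * (T_A - T) + Q) / RHO_C
    T[[0, -1], :, :] = T_A
    T[:, [0, -1], :] = T_A
    T[:, :, [0, -1]] = T_A
    return T

# crude usage: heat a small needle-tip region and iterate
T = np.full((64, 64, 64), T_A)
Q = np.zeros_like(T)
Q[30:34, 30:34, 30:34] = 2e6  # volumetric heat source [W/m^3]
for _ in range(200):
    T = pennes_step(T, Q)
```

Swapping the resolution (dx and grid size) between interaction and static frames is the essence of the paper's multi-resolution scheme.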
Robotic CBCT meets robotic ultrasound.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-03-12; DOI: 10.1007/s11548-025-03336-x
Feng Li, Yuan Bi, Dianye Huang, Zhongliang Jiang, Nassir Navab
Purpose: Multi-modality imaging systems offer optimally fused images for safe and precise interventions in modern clinical practice, such as computed tomography-ultrasound (CT-US) guidance for needle insertion. However, the limited dexterity and mobility of current imaging devices hinder their integration into standardized workflows and the advancement toward fully autonomous intervention systems. In this paper, we present a novel clinical setup in which robotic cone-beam computed tomography (CBCT) and robotic US are pre-calibrated and dynamically co-registered, enabling new clinical applications. This setup allows registration-free rigid registration, facilitating multi-modal guided procedures in the absence of tissue deformation.
Methods: First, a one-time pre-calibration is performed between the systems. To ensure a safe insertion path by highlighting critical vasculature on the 3D CBCT, SAM2 segments vessels from B-mode images, using the Doppler signal as an autonomously generated prompt. Based on the registration, the Doppler image or segmented vessel masks are then mapped onto the CBCT, creating an optimally fused image with comprehensive detail. To validate the system, we used a specially designed phantom featuring lesions covered by ribs and multiple vessels with simulated moving flow.
Results: The mapping error between US and CBCT averaged 1.72 ± 0.62 mm. A user study demonstrated the effectiveness of CBCT-US fusion for needle insertion guidance, showing significant improvements in time efficiency, accuracy, and success rate. Needle intervention performance improved by approximately 50% compared to the conventional US-guided workflow.
Conclusion: We present the first robotic dual-modality imaging system designed to guide clinical applications. The results show significant performance improvements compared to traditional manual interventions.
Pages: 1049-1057. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167334/pdf/
Citations: 0
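Because both robots are pre-calibrated, mapping a Doppler or B-mode pixel into the CBCT frame reduces to chaining homogeneous transforms, with no intraoperative registration needed while the tissue does not deform. The transform names and function below are assumptions for illustration, not the paper's API.

```python
# Mapping a US pixel into the CBCT frame by chaining 4x4 homogeneous
# transforms: image-to-probe calibration, robot forward kinematics, and the
# one-time base-to-base calibration. All names are illustrative assumptions.
import numpy as np

def map_us_pixel_to_cbct(px, py, mm_per_px,
                         T_probe_img,   # US image plane -> probe (calibration)
                         T_base_probe,  # probe -> US robot base (kinematics)
                         T_cbct_base):  # US robot base -> CBCT frame (one-time)
    p_img = np.array([px * mm_per_px, py * mm_per_px, 0.0, 1.0])
    p = T_cbct_base @ T_base_probe @ T_probe_img @ p_img
    return p[:3]

# usage with identity placeholders
I4 = np.eye(4)
print(map_us_pixel_to_cbct(120, 80, 0.1, I4, I4, I4))
```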
Parametric-MAA: fast, object-centric avoidance of metal artifacts for intraoperative CBCT.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-04-05; DOI: 10.1007/s11548-025-03348-7
Maximilian Rohleder, Andreas Maier, Bjoern Kreher
Purpose: Metal artifacts remain a persistent issue in intraoperative CBCT imaging. Particularly in orthopedic and trauma applications, these artifacts obstruct clinically relevant areas around the implant, reducing the modality's clinical value. Metal artifact avoidance (MAA) methods have shown potential to improve image quality through trajectory adjustments, but often fail in clinical practice because they focus on irrelevant objects and have high computational demands. To address these limitations, we introduce the novel parametric metal artifact avoidance (P-MAA) method.
Methods: The P-MAA method first detects keypoints in two scout views using a deep learning model. These keypoints are used to model each clinically relevant object as an ellipsoid, capturing its position, extent, and orientation. We hypothesize that fine details of object shapes are less critical for artifact reduction. Based on these ellipsoidal representations, we devise a computationally efficient metric for scoring view trajectories, enabling fast, CPU-based optimization. A detection model for object localization was trained on both simulated and real data and validated on real clinical cases. The scoring method was benchmarked against a raytracing-based approach.
Results: The trained detection model achieved a mean average recall of 0.78, demonstrating generalizability to unseen clinical cases. The ellipsoid-based scoring method closely approximated the raytracing results and was effective in complex clinical scenarios. Additionally, the ellipsoid method provided a 33-fold increase in speed, without the need for GPU acceleration.
Conclusion: The P-MAA approach provides a feasible solution for metal artifact avoidance in CBCT imaging, enabling fast trajectory optimization while focusing on clinically relevant objects. This method represents a significant step toward practical intraoperative implementation of MAA techniques.
Pages: 1115-1124. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167300/pdf/
Citations: 0
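The appeal of the ellipsoid representation is that a ray's chord length through an ellipsoid has a closed form, so a candidate view can be scored without raytracing a volume. A sketch follows; the scoring function itself is an illustrative assumption, not the paper's published metric.

```python
# Closed-form chord length of a ray through an ellipsoid (center c, rotation
# R, semi-axes a), usable to score candidate views cheaply on the CPU.
import numpy as np

def chord_length(origin, direction, center, R, axes):
    """World-space length of the ray segment inside the ellipsoid (0 if miss)."""
    # map into the ellipsoid's unit-sphere space: y = diag(1/a) R^T (x - c);
    # the ray parameter t is shared between world and sphere space
    o = (R.T @ (origin - center)) / axes
    d = (R.T @ direction) / axes
    a2, b, c2 = d @ d, 2.0 * (o @ d), o @ o - 1.0
    disc = b * b - 4.0 * a2 * c2
    if disc <= 0.0:
        return 0.0
    t1 = (-b - np.sqrt(disc)) / (2.0 * a2)
    t2 = (-b + np.sqrt(disc)) / (2.0 * a2)
    return (t2 - t1) * np.linalg.norm(direction)

def view_score(origin, direction, ellipsoids):
    """Lower is better: total metal chord length along the view's central ray."""
    return sum(chord_length(origin, direction, c, R, a)
               for c, R, a in ellipsoids)

# usage: one spherical 'implant' of radius 5 mm at the origin
score = view_score(np.array([0.0, 0.0, -100.0]), np.array([0.0, 0.0, 1.0]),
                   [(np.zeros(3), np.eye(3), np.array([5.0, 5.0, 5.0]))])
print(score)  # 10.0: the ray crosses the full diameter
```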
An augmented reality overlay for navigated prostatectomy using fiducial-free 2D-3D registration.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-05-08; DOI: 10.1007/s11548-025-03374-5
Johannes Bender, Jeremy Kwe, Benedikt Hoeh, Katharina Boehm, Ivan Platzek, Angelika Borkowetz, Stefanie Speidel, Micha Pfeiffer
Purpose: Markerless navigation in minimally invasive surgery remains an unsolved challenge. Many proposed navigation systems for minimally invasive surgeries rely on stereoscopic images, while in clinical practice monocular endoscopes are often used. Combined with the lack of automatic video-based navigation systems for prostatectomies, this paper explores methods that tackle both research gaps at once for robot-assisted prostatectomies.
Methods: To realize a semi-automatic augmented reality overlay for navigated prostatectomy, the camera pose w.r.t. the prostate must be estimated. We developed a method in which visual cues are drawn on top of the organ after an initial manual alignment, simultaneously creating matching landmarks on the 2D and 3D data. Starting from this key frame, the cues are tracked in the endoscopic video. Both PnPRansac and differentiable rendering are then explored to perform 2D-3D registration for each frame.
Results: We performed experiments on synthetic and in vivo data. On synthetic data, differentiable rendering achieves a median target registration error of 6.11 mm. Both PnPRansac and differentiable rendering are feasible methods for 2D-3D registration.
Conclusion: We demonstrated a video-based markerless augmented reality overlay for navigated prostatectomy, using visual cues as an anchor.
Pages: 1265-1272. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167248/pdf/
Citations: 0
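For the PnPRansac variant, OpenCV's solvePnPRansac is one concrete way to recover the per-frame camera pose from tracked 2D cues and their 3D counterparts on the organ model. The sketch below builds self-consistent synthetic correspondences; the intrinsics, pose, and RANSAC settings are placeholders, not the paper's values.

```python
# Per-frame 2D-3D registration via PnP + RANSAC. Synthetic correspondences
# stand in for the paper's tracked visual cues.
import numpy as np
import cv2

rng = np.random.default_rng(0)
pts3d = rng.uniform(-25, 25, size=(12, 3)).astype(np.float32)  # model frame, mm

K = np.array([[1000.0, 0.0, 480.0],   # assumed pinhole intrinsics
              [0.0, 1000.0, 270.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # assume undistorted endoscope images
rvec_gt = np.array([0.1, -0.2, 0.05])
tvec_gt = np.array([0.0, 0.0, 120.0]) # organ ~120 mm in front of the camera
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, dist)

ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    pts3d, pts2d, K, dist,
    reprojectionError=4.0, iterationsCount=200, flags=cv2.SOLVEPNP_EPNP)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # camera pose w.r.t. the prostate model
```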
FACT: foundation model for assessing cancer tissue margins with mass spectrometry.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-04-04; DOI: 10.1007/s11548-025-03355-8
Mohammad Farahmand, Amoon Jamzad, Fahimeh Fooladgar, Laura Connolly, Martin Kaufmann, Kevin Yi Mi Ren, John Rudan, Doug McKay, Gabor Fichtinger, Parvin Mousavi
Purpose: Accurately classifying tissue margins during cancer surgery is crucial for ensuring complete tumor removal. Rapid Evaporative Ionization Mass Spectrometry (REIMS), a tool for real-time intraoperative margin assessment, generates spectra that require machine learning models to support clinical decision-making. However, the scarcity of labeled data in surgical contexts presents a significant challenge. This study is the first to develop a foundation model tailored specifically to REIMS data, addressing this limitation and advancing real-time surgical margin assessment.
Methods: We propose FACT, a Foundation model for Assessing Cancer Tissue margins. FACT is an adaptation of a foundation model originally designed for text-audio association, pretrained with our proposed supervised contrastive approach based on a triplet loss. An ablation study compares our proposed model against other models and pretraining methods.
Results: Our proposed model significantly improves classification performance, achieving state-of-the-art results with an AUROC of 82.4% ± 0.8. The results demonstrate the advantage of our pretraining method and selected backbone over the self-supervised and semi-supervised baselines and alternative models.
Conclusion: Our findings demonstrate that foundation models, adapted and pretrained with our novel approach, can effectively classify REIMS data even with limited labeled examples. This highlights the viability of foundation models for enhancing real-time surgical margin assessment, particularly in data-scarce clinical environments.
Pages: 1097-1104.
Citations: 0
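The supervised contrastive pretraining with a triplet loss can be sketched as follows. The MLP encoder, naive in-batch triplet mining, and spectrum dimensionality are stand-in assumptions, not the adapted text-audio backbone the paper actually uses.

```python
# Supervised contrastive pretraining with a triplet loss: same-class spectra
# are pulled together, different-class spectra pushed apart.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(2000, 512), nn.ReLU(), nn.Linear(512, 128))
triplet = nn.TripletMarginLoss(margin=1.0)

def triplet_batch_loss(spectra, labels):
    """spectra: (B, 2000) REIMS-like vectors; labels: (B,) tissue classes.
    Assumes each anchor has at least one positive and one negative in-batch."""
    z = nn.functional.normalize(encoder(spectra), dim=1)
    losses = []
    for i in range(len(labels)):
        pos = (labels == labels[i]).nonzero().flatten()
        pos = pos[pos != i]                      # exclude the anchor itself
        neg = (labels != labels[i]).nonzero().flatten()
        if len(pos) == 0 or len(neg) == 0:
            continue
        # naive mining: first positive / first negative for anchor i
        losses.append(triplet(z[i:i + 1], z[pos[:1]], z[neg[:1]]))
    return torch.stack(losses).mean()

loss = triplet_batch_loss(torch.randn(16, 2000), torch.randint(0, 2, (16,)))
loss.backward()
```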
Early operative difficulty assessment in laparoscopic cholecystectomy via snapshot-centric video analysis.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-04-21; DOI: 10.1007/s11548-025-03372-7
Saurav Sharma, Maria Vannucci, Leonardo Pestana Legori, Mario Scaglia, Giovanni Guglielmo Laracca, Didier Mutter, Sergio Alfieri, Pietro Mascagni, Nicolas Padoy
Purpose: Laparoscopic cholecystectomy (LC) operative difficulty (LCOD) is highly variable and influences outcomes. Despite extensive LC studies in surgical workflow analysis, few efforts explore LCOD using intraoperative video data. Early recognition of LCOD could allow prompt review by expert surgeons, enhance operating room (OR) planning, and improve surgical outcomes.
Methods: We propose the clinical task of early LCOD assessment from limited video observations. We design SurgPrOD, a deep learning model that assesses LCOD by analyzing features at global and local temporal resolutions (snapshots) of the observed LC video. We also propose a novel snapshot-centric attention (SCA) module, acting across snapshots, to enhance LCOD prediction. We introduce the CholeScore dataset, featuring video-level LCOD labels, to validate our method.
Results: We evaluate SurgPrOD on three LCOD assessment scales in the CholeScore dataset. On our new metric assessing early and stable correct predictions, SurgPrOD surpasses baselines by at least 0.22 points. SurgPrOD also improves over baselines by at least 9 and 5 percentage points in F1 score and top-1 accuracy, respectively, demonstrating its effectiveness in correct predictions.
Conclusion: We propose a new task for early LCOD assessment and a novel model, SurgPrOD, that analyzes surgical video from global and local perspectives. Our results on the CholeScore dataset establish a new benchmark for studying LCOD with intraoperative video data.
Pages: 1185-1193.
Citations: 0
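One plausible reading of attention acting across snapshots is a token per temporal window, with self-attention letting snapshots exchange context before pooling into a difficulty grade. The sketch below is an assumption-laden illustration, not SurgPrOD's published architecture.

```python
# Sketch of cross-snapshot attention: each snapshot (global or local temporal
# window) is one token; self-attention mixes them before grade prediction.
# Dimensions, pooling, and head sizes are assumptions.
import torch
import torch.nn as nn

class SnapshotAttention(nn.Module):
    def __init__(self, dim=256, heads=4, n_grades=3):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        self.head = nn.Linear(dim, n_grades)  # LCOD grade logits

    def forward(self, snap_feats):
        """snap_feats: (B, S, dim), one token per snapshot."""
        ctx, _ = self.attn(snap_feats, snap_feats, snap_feats)
        fused = self.norm(snap_feats + ctx)   # residual attention
        return self.head(fused.mean(dim=1))   # pool snapshots -> logits

logits = SnapshotAttention()(torch.randn(2, 5, 256))  # 5 snapshots per video
```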
IJCARS: IPCAI 2025 special issue - 16th International Conference on Information Processing in Computer-Assisted Interventions 2025.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-05-24; DOI: 10.1007/s11548-025-03435-9
Sophia Bano, Sara Moccia, Anirban Mukhopadhyay
Editorial introducing the IPCAI 2025 special issue; no abstract. Pages: 1047-1048.
Citations: 0
SHADeS: self-supervised monocular depth estimation through non-Lambertian image decomposition.
IF 2.3, CAS Zone 3, Medicine
International Journal of Computer Assisted Radiology and Surgery. Pub Date: 2025-06-01; Epub Date: 2025-05-13; DOI: 10.1007/s11548-025-03371-8
Rema Daher, Francisco Vasconcelos, Danail Stoyanov
Purpose: Visual 3D scene reconstruction can support colonoscopy navigation; it can help recognize which portions of the colon have been visualized and characterize the size and shape of polyps. This remains a very challenging problem due to complex illumination variations, including abundant specular reflections. We investigate how to effectively decouple light and depth in this problem.
Methods: We introduce a self-supervised model that simultaneously characterizes the shape and lighting of the visualized colonoscopy scene. Our model estimates shading, albedo, depth, and specularities (SHADeS) from single images. Unlike previous approaches such as IID (Li et al., IEEE J Biomed Health Inform, https://doi.org/10.1109/JBHI.2024.3400804, 2024), we use a non-Lambertian model that treats specular reflections as a separate light component. The implementation of our method is available at https://github.com/RemaDaher/SHADeS.
Results: We demonstrate on real colonoscopy images (Hyper Kvasir) that previous models for light decomposition (IID) and depth estimation (MonoViT, Monodepth2) are negatively affected by specularities. In contrast, SHADeS simultaneously produces light decompositions and depth maps that are robust to specular regions. We also perform a quantitative comparison on phantom data (C3VD), further demonstrating the robustness of our model.
Conclusion: Modelling specular reflections improves depth estimation in colonoscopy. We propose an effective self-supervised approach that uses this insight to jointly estimate light decomposition and depth. Light decomposition also has the potential to help with other problems, such as place recognition within the colon.
Pages: 1255-1263. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167237/pdf/
Citations: 0
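Treating specular reflections as a separate light component suggests a reconstruction constraint of the form I ≈ albedo · shading + specular. A toy decomposition head and loss under that assumption follow; the additive composition and all layer choices are assumptions, not necessarily the paper's exact formulation.

```python
# Toy non-Lambertian decomposition head: predict albedo, shading, specular,
# and depth from shared features, and tie the first three back to the input
# image with an additive-specular reconstruction loss.
import torch
import torch.nn as nn

class DecompositionHead(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        self.albedo = nn.Conv2d(feat_ch, 3, 1)
        self.shading = nn.Conv2d(feat_ch, 1, 1)   # grayscale illumination
        self.specular = nn.Conv2d(feat_ch, 1, 1)  # separate light component
        self.depth = nn.Conv2d(feat_ch, 1, 1)

    def forward(self, feats):
        a = torch.sigmoid(self.albedo(feats))
        s = torch.sigmoid(self.shading(feats))
        sp = torch.relu(self.specular(feats))
        d = torch.sigmoid(self.depth(feats))
        return a, s, sp, d

def reconstruction_loss(image, a, s, sp):
    recon = a * s + sp  # diffuse term plus additive specular term
    return torch.mean((recon - image) ** 2)

feats = torch.randn(1, 64, 128, 160)   # stand-in backbone features
image = torch.rand(1, 3, 128, 160)
a, s, sp, d = DecompositionHead()(feats)
loss = reconstruction_loss(image, a, s, sp)
```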