International Journal of Computer Assisted Radiology and Surgery: Latest Publications

NICE polyp feature classification for colonoscopy screening.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-03-13 | DOI: 10.1007/s11548-025-03338-9
Thomas De Carvalho, Rawen Kader, Patrick Brandao, Laurence B Lovat, Peter Mountney, Danail Stoyanov
{"title":"NICE polyp feature classification for colonoscopy screening.","authors":"Thomas De Carvalho, Rawen Kader, Patrick Brandao, Laurence B Lovat, Peter Mountney, Danail Stoyanov","doi":"10.1007/s11548-025-03338-9","DOIUrl":"https://doi.org/10.1007/s11548-025-03338-9","url":null,"abstract":"<p><strong>Purpose: </strong>Colorectal cancer is one of the most prevalent cancers worldwide, highlighting the critical need for early and accurate diagnosis to reduce patient risks. Inaccurate diagnoses not only compromise patient outcomes but also lead to increased costs and additional time burdens for clinicians. Enhancing diagnostic accuracy is essential, and this study focuses on improving the accuracy of polyp classification using the NICE classification, which evaluates three key features: colour, vessels, and surface pattern.</p><p><strong>Methods: </strong>A multiclass classifier was developed and trained to independently classify each of the three features in the NICE classification. The approach prioritizes clinically relevant features rather than relying on handcrafted or obscure deep learning features, ensuring transparency and reliability for clinical use. The classifier was trained on internal datasets and tested on both internal and public datasets.</p><p><strong>Results: </strong>The classifier successfully classified the three polyp features, achieving an accuracy of over 92% on internal datasets and exceeding 88% on a public dataset. The high classification accuracy demonstrates the system's effectiveness in identifying the key features from the NICE classification.</p><p><strong>Conclusion: </strong>This study underscores the potential of using an independent classification approach for NICE features to enhance clinical decision-making in colorectal cancer diagnosis. The method shows promise in improving diagnostic accuracy, which could lead to better patient outcomes and more efficient clinical workflows.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143617726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
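To make the methods section concrete, below is a minimal PyTorch sketch (not the authors' published code) of one way to classify the three NICE features independently: a shared backbone with one multiclass head per feature. The ResNet-18 backbone and the three classes per head are assumptions for illustration only.

```python
# Minimal sketch of a shared-backbone, three-head NICE feature classifier.
# Backbone choice and per-head class counts are assumptions, not the paper's design.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class NicePolypClassifier(nn.Module):
    def __init__(self, n_classes_per_feature=3):
        super().__init__()
        backbone = resnet18(weights=None)   # pretrained weights optional
        backbone.fc = nn.Identity()         # keep the 512-d pooled features
        self.backbone = backbone
        # One independent multiclass head per clinically defined NICE feature.
        self.heads = nn.ModuleDict({
            name: nn.Linear(512, n_classes_per_feature)
            for name in ("colour", "vessels", "surface")
        })

    def forward(self, x):
        feats = self.backbone(x)
        return {name: head(feats) for name, head in self.heads.items()}

model = NicePolypClassifier()
logits = model(torch.randn(2, 3, 224, 224))   # dummy batch of polyp crops
print({k: v.shape for k, v in logits.items()})
```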
Robotic CBCT meets robotic ultrasound.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-03-12 | DOI: 10.1007/s11548-025-03336-x
Feng Li, Yuan Bi, Dianye Huang, Zhongliang Jiang, Nassir Navab
{"title":"Robotic CBCT meets robotic ultrasound.","authors":"Feng Li, Yuan Bi, Dianye Huang, Zhongliang Jiang, Nassir Navab","doi":"10.1007/s11548-025-03336-x","DOIUrl":"https://doi.org/10.1007/s11548-025-03336-x","url":null,"abstract":"<p><strong>Purpose: </strong>The multi-modality imaging system offers optimal fused images for safe and precise interventions in modern clinical practices, such as computed tomography-ultrasound (CT-US) guidance for needle insertion. However, the limited dexterity and mobility of current imaging devices hinder their integration into standardized workflows and the advancement toward fully autonomous intervention systems. In this paper, we present a novel clinical setup where robotic cone beam computed tomography (CBCT) and robotic US are pre-calibrated and dynamically co-registered, enabling new clinical applications. This setup allows registration-free rigid registration, facilitating multi-modal guided procedures in the absence of tissue deformation.</p><p><strong>Methods: </strong>First, a one-time pre-calibration is performed between the systems. To ensure a safe insertion path by highlighting critical vasculature on the 3D CBCT, SAM2 segments vessels from B-mode images, using the Doppler signal as an autonomously generated prompt. Based on the registration, the Doppler image or segmented vessel masks are then mapped onto the CBCT, creating an optimally fused image with comprehensive detail. To validate the system, we used a specially designed phantom, featuring lesions covered by ribs and multiple vessels with simulated moving flow.</p><p><strong>Results: </strong>The mapping error between US and CBCT resulted in an average deviation of <math><mrow><mn>1.72</mn> <mo>±</mo> <mn>0.62</mn></mrow> </math> mm. A user study demonstrated the effectiveness of CBCT-US fusion for needle insertion guidance, showing significant improvements in time efficiency, accuracy, and success rate. Needle intervention performance improved by approximately 50% compared to the conventional US-guided workflow.</p><p><strong>Conclusion: </strong>We present the first robotic dual-modality imaging system designed to guide clinical applications. The results show significant performance improvements compared to traditional manual interventions.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143617729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
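The "one-time pre-calibration" plus dynamic co-registration amounts to chaining rigid transforms between coordinate frames. Below is a minimal NumPy sketch, with hypothetical calibration values, of how a point in the US probe frame would map into the CBCT frame; all transforms and names here are illustrative assumptions.

```python
# Sketch of the rigid transform chain: CBCT <- robot base <- end-effector <- US probe.
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical calibration results (identity rotations, toy translations in meters).
T_base_ee   = make_T(np.eye(3), [0.10, 0.00, 0.30])  # robot forward kinematics
T_ee_probe  = make_T(np.eye(3), [0.00, 0.00, 0.05])  # hand-eye calibration
T_cbct_base = make_T(np.eye(3), [0.50, 0.20, 0.00])  # one-time pre-calibration

def us_point_to_cbct(p_us):
    """Map a 3D point expressed in the US probe frame into the CBCT frame."""
    p = np.append(p_us, 1.0)                          # homogeneous coordinates
    return (T_cbct_base @ T_base_ee @ T_ee_probe @ p)[:3]

print(us_point_to_cbct(np.array([0.0, 0.0, 0.02])))
```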
Automatic diagnosis of abdominal pathologies in untrimmed ultrasound videos.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-03-11 | DOI: 10.1007/s11548-025-03334-z
Güinther Saibro, Yvonne Keeza, Benoît Sauer, Jacques Marescaux, Michele Diana, Alexandre Hostettler, Toby Collins
{"title":"Automatic diagnosis of abdominal pathologies in untrimmed ultrasound videos.","authors":"Güinther Saibro, Yvonne Keeza, Benoît Sauer, Jacques Marescaux, Michele Diana, Alexandre Hostettler, Toby Collins","doi":"10.1007/s11548-025-03334-z","DOIUrl":"https://doi.org/10.1007/s11548-025-03334-z","url":null,"abstract":"<p><strong>Purpose: </strong>Despite major advances in Computer Assisted Diagnosis (CAD), the need for carefully labeled training data remains an important clinical translation barrier. This work aims to overcome this barrier for ultrasound video-based CAD, using video-level classification labels combined with a novel training strategy to improve the generalization performance of state-of-the-art (SOTA) video classifiers.</p><p><strong>Methods: </strong>SOTA video classifiers were trained and evaluated on a novel ultrasound video dataset of liver and kidney pathologies, and they all struggled to generalize, especially for kidney pathologies. A new training strategy is presented, wherein a frame relevance assessor is trained to score the video frames in a video by diagnostic relevance. This is used to automatically generate diagnostically-relevant video clips (DR-Clips), which guide a video classifier during training and inference.</p><p><strong>Results: </strong>Using DR-Clips with a Video Swin Transformer, we achieved a 0.92 ROC-AUC for kidney pathology detection in videos, compared to 0.72 ROC-AUC with a Swin Transformer and standard video clips. For liver steatosis detection, due to the diffuse nature of the pathology, the Video Swin Transformer, and other video classifiers, performed similarly well, generally exceeding a 0.92 ROC-AUC.</p><p><strong>Conclusion: </strong>In theory, video classifiers, such as video transformers, should be able to solve ultrasound CAD tasks with video labels. However, in practice, video labels provide weaker supervision compared to image labels, resulting in worse generalization, as demonstrated. The additional frame guidance provided by DR-Clips enhances performance significantly. The results highlight current limits and opportunities to improve frame guidance.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607068","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
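The abstract does not give the exact DR-Clip extraction rule; a plausible minimal sketch is to score every frame with the relevance assessor and keep the fixed-length window with the highest mean score. The window rule and clip length below are assumptions.

```python
# Sketch: turn per-frame relevance scores into one diagnostically relevant clip.
import numpy as np

def best_clip(scores: np.ndarray, clip_len: int) -> tuple[int, int]:
    """Return (start, end) of the contiguous window with the highest mean score."""
    if len(scores) <= clip_len:
        return 0, len(scores)
    csum = np.concatenate(([0.0], np.cumsum(scores)))
    window_means = (csum[clip_len:] - csum[:-clip_len]) / clip_len
    start = int(np.argmax(window_means))
    return start, start + clip_len

# Hypothetical relevance scores from a trained frame relevance assessor.
scores = np.array([0.1, 0.2, 0.9, 0.95, 0.85, 0.3, 0.1])
print(best_clip(scores, clip_len=3))   # -> (2, 5)
```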
Enhanced self-supervised monocular depth estimation with self-attention and joint depth-pose loss for laparoscopic images.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-02-28 | DOI: 10.1007/s11548-025-03332-1
Wenda Li, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori
{"title":"Enhanced self-supervised monocular depth estimation with self-attention and joint depth-pose loss for laparoscopic images.","authors":"Wenda Li, Yuichiro Hayashi, Masahiro Oda, Takayuki Kitasaka, Kazunari Misawa, Kensaku Mori","doi":"10.1007/s11548-025-03332-1","DOIUrl":"https://doi.org/10.1007/s11548-025-03332-1","url":null,"abstract":"<p><strong>Purpose: </strong>Depth estimation is a powerful tool for navigation in laparoscopic surgery. Previous methods utilize predicted depth maps and the relative poses of the camera to accomplish self-supervised depth estimation. However, the smooth surfaces of organs with textureless regions and the laparoscope's complex rotations make depth and pose estimation difficult in laparoscopic scenes. Therefore, we propose a novel and effective self-supervised monocular depth estimation method with self-attention-guided pose estimation and a joint depth-pose loss function for laparoscopic images.</p><p><strong>Methods: </strong>We extract feature maps and calculate the minimum re-projection error as a feature-metric loss to establish constraints based on feature maps with more meaningful representations. Moreover, we introduce the self-attention block in the pose estimation network to predict rotations and translations of the relative poses. In addition, we minimize the difference between predicted relative poses as the pose loss. We combine all of the losses as a joint depth-pose loss.</p><p><strong>Results: </strong>The proposed method is extensively evaluated using SCARED and Hamlyn datasets. Quantitative results show that the proposed method achieves improvements of about 18.07 <math><mo>%</mo></math> and 14.00 <math><mo>%</mo></math> in the absolute relative error when combining all of the proposed components for depth estimation on SCARED and Hamlyn datasets. The qualitative results show that the proposed method produces smooth depth maps with low error in various laparoscopic scenes. The proposed method also exhibits a trade-off between computational efficiency and performance.</p><p><strong>Conclusion: </strong>This study considers the characteristics of laparoscopic datasets and presents a simple yet effective self-supervised monocular depth estimation. We propose a joint depth-pose loss function based on the extracted feature for depth estimation on laparoscopic images guided by a self-attention block. The experimental results prove that all of the proposed components contribute to the proposed method. Furthermore, the proposed method strikes an efficient balance between computational efficiency and performance.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143531055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
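A minimal sketch, under assumed formulations, of the two loss ingredients named in the methods: a per-pixel minimum re-projection (feature-metric) loss over warped source feature maps, and a pose loss penalizing forward/backward relative poses that are not mutual inverses. Weights, shapes, and the exact pose-consistency form are illustrative, not the paper's values.

```python
# Sketch of a joint depth-pose loss: min re-projection over sources + pose consistency.
import torch

def min_reprojection_loss(target_feats, warped_feats_list):
    """Per-pixel minimum of the L1 feature error over all warped source frames."""
    errors = [(target_feats - wf).abs().mean(dim=1, keepdim=True)
              for wf in warped_feats_list]
    return torch.cat(errors, dim=1).min(dim=1).values.mean()

def pose_consistency_loss(T_ab, T_ba):
    """Penalize deviation of a forward/backward pose pair from being mutual inverses."""
    eye = torch.eye(4, device=T_ab.device).expand_as(T_ab)
    return (T_ab @ T_ba - eye).abs().mean()

def joint_depth_pose_loss(target_feats, warped_feats_list, T_ab, T_ba,
                          w_feat=1.0, w_pose=0.1):
    return (w_feat * min_reprojection_loss(target_feats, warped_feats_list)
            + w_pose * pose_consistency_loss(T_ab, T_ba))

# Dummy tensors: B x C x H x W feature maps and B x 4 x 4 relative poses.
f_t = torch.randn(2, 16, 32, 32)
f_w1, f_w2 = torch.randn_like(f_t), torch.randn_like(f_t)
T_ab = torch.eye(4).expand(2, 4, 4)
T_ba = torch.eye(4).expand(2, 4, 4)
print(joint_depth_pose_loss(f_t, [f_w1, f_w2], T_ab, T_ba))
```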
SfMDiffusion: self-supervised monocular depth estimation in endoscopy based on diffusion models.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-02-24 | DOI: 10.1007/s11548-025-03333-0
Yu Li, Da Chang, Die Luo, Jin Huang, Lan Dong, Du Wang, Liye Mei, Cheng Lei
{"title":"SfMDiffusion: self-supervised monocular depth estimation in endoscopy based on diffusion models.","authors":"Yu Li, Da Chang, Die Luo, Jin Huang, Lan Dong, Du Wang, Liye Mei, Cheng Lei","doi":"10.1007/s11548-025-03333-0","DOIUrl":"https://doi.org/10.1007/s11548-025-03333-0","url":null,"abstract":"<p><strong>Purpose: </strong>In laparoscopic surgery, accurate 3D reconstruction from endoscopic video is crucial for effective image-guided techniques. Current methods for monocular depth estimation (MDE) face challenges in complex surgical scenes, including limited training data, specular reflections, and varying illumination conditions.</p><p><strong>Methods: </strong>We propose SfMDiffusion, a novel diffusion-based self-supervised framework for MDE. Our approach combines: (1) a denoising diffusion process guided by pseudo-ground-truth depth maps, (2) knowledge distillation from a pre-trained teacher model, and (3) discriminative priors to enhance estimation robustness. Our design enables accurate depth estimation without requiring ground-truth depth data during training.</p><p><strong>Results: </strong>Experiments on the SCARED and Hamlyn datasets demonstrate that SfMDiffusion achieves superior performance: an Absolute relative error (Abs Rel) of 0.049, a Squared relative error (Sq Rel) of 0.366, and a Root Mean Square Error (RMSE) of 4.305 on SCARED dataset, and Abs Rel of 0.067, Sq Rel of 0.800, and RMSE of 7.465 on Hamlyn dataset.</p><p><strong>Conclusion: </strong>SfMDiffusion provides an innovative approach for 3D reconstruction in image-guided surgical techniques. Future work will focus on computational optimization and validation across diverse surgical scenarios. Our code is available at https://github.com/Skylanding/SfM-Diffusion .</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
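A heavily simplified sketch of the training signal described: denoise a noisy pseudo-ground-truth depth map conditioned on the RGB frame, with a distillation term pulling the recovered depth toward the teacher's output. The placeholder network, single fixed noise level, and loss weights are assumptions; this is not the SfMDiffusion architecture.

```python
# Sketch: diffusion-style denoising of pseudo-GT depth plus teacher distillation.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Placeholder epsilon-predictor conditioned on the RGB frame (assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, noisy_depth, rgb):
        return self.net(torch.cat([noisy_depth, rgb], dim=1))

def training_step(model, rgb, teacher_depth, alpha_bar=0.7, w_distill=0.5):
    noise = torch.randn_like(teacher_depth)
    noisy = alpha_bar**0.5 * teacher_depth + (1 - alpha_bar)**0.5 * noise
    eps_hat = model(noisy, rgb)
    denoise_loss = (eps_hat - noise).pow(2).mean()
    # Distillation: recover x0 from the predicted noise and match the teacher.
    x0_hat = (noisy - (1 - alpha_bar)**0.5 * eps_hat) / alpha_bar**0.5
    distill_loss = (x0_hat - teacher_depth).abs().mean()
    return denoise_loss + w_distill * distill_loss

model = TinyDenoiser()
loss = training_step(model, torch.randn(2, 3, 64, 64), torch.rand(2, 1, 64, 64))
print(loss)
```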
Multi-dimensional consistency learning between 2D Swin U-Net and 3D U-Net for intestine segmentation from CT volume.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-02-22 | DOI: 10.1007/s11548-024-03252-6
Qin An, Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Hiroo Uchida, Akinari Hinoki, Kojiro Suzuki, Aitaro Takimoto, Masahiro Oda, Kensaku Mori
{"title":"Multi-dimensional consistency learning between 2D Swin U-Net and 3D U-Net for intestine segmentation from CT volume.","authors":"Qin An, Hirohisa Oda, Yuichiro Hayashi, Takayuki Kitasaka, Hiroo Uchida, Akinari Hinoki, Kojiro Suzuki, Aitaro Takimoto, Masahiro Oda, Kensaku Mori","doi":"10.1007/s11548-024-03252-6","DOIUrl":"https://doi.org/10.1007/s11548-024-03252-6","url":null,"abstract":"<p><strong>Purpose: </strong>The paper introduces a novel two-step network based on semi-supervised learning for intestine segmentation from CT volumes. The intestine folds in the abdomen with complex spatial structures and contact with neighboring organs that bring difficulty for accurate segmentation and labeling at the pixel level. We propose a multi-dimensional consistency learning method to reduce the insufficient intestine segmentation results caused by complex structures and the limited labeled dataset.</p><p><strong>Methods: </strong>We designed a two-stage model to segment the intestine. In stage 1, a 2D Swin U-Net is trained using labeled data to generate pseudo-labels for unlabeled data. In stage 2, a 3D U-Net is trained using labeled and unlabeled data to create the final segmentation model. The model comprises two networks from different dimensions, capturing more comprehensive representations of the intestine and potentially enhancing the model's performance in intestine segmentation.</p><p><strong>Results: </strong>We used 59 CT volumes to validate the effectiveness of our method. The experiment was repeated three times getting the average as the final result. Compared to the baseline method, our method improved 3.25% Dice score and 6.84% recall rate.</p><p><strong>Conclusion: </strong>The proposed method is based on semi-supervised learning and involves training both 2D Swin U-Net and 3D U-Net. The method mitigates the impact of limited labeled data and maintains consistncy of multi-dimensional outputs from the two networks to improve the segmentation accuracy. Compared to previous methods, our method demonstrates superior segmentation performance.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143477169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
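A minimal sketch (assumed mechanics) of the stage-1-to-stage-2 hand-off: run the trained 2D network slice by slice over an unlabeled CT volume and stack the per-slice predictions into a 3D pseudo-label volume for training the 3D U-Net. The placeholder convolution stands in for the trained Swin U-Net.

```python
# Sketch: slice-wise 2D inference stacked into a 3D pseudo-label volume.
import torch

@torch.no_grad()
def pseudo_label_volume(net2d, volume):
    """volume: (D, H, W) CT intensities -> (D, H, W) pseudo-label mask."""
    labels = []
    for z in range(volume.shape[0]):
        slc = volume[z].unsqueeze(0).unsqueeze(0)   # (1, 1, H, W)
        logits = net2d(slc)                         # (1, C, H, W)
        labels.append(logits.argmax(dim=1)[0])      # (H, W) class map
    return torch.stack(labels, dim=0)

# Placeholder 2D network standing in for the trained Swin U-Net (assumption).
net2d = torch.nn.Conv2d(1, 2, kernel_size=3, padding=1)
pseudo = pseudo_label_volume(net2d, torch.randn(8, 64, 64))
print(pseudo.shape)   # torch.Size([8, 64, 64])
```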
TRUSWorthy: toward clinically applicable deep learning for confident detection of prostate cancer in micro-ultrasound.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-02-20 | DOI: 10.1007/s11548-025-03335-y
Mohamed Harmanani, Paul F R Wilson, Minh Nguyen Nhat To, Mahdi Gilany, Amoon Jamzad, Fahimeh Fooladgar, Brian Wodlinger, Purang Abolmaesumi, Parvin Mousavi
{"title":"TRUSWorthy: toward clinically applicable deep learning for confident detection of prostate cancer in micro-ultrasound.","authors":"Mohamed Harmanani, Paul F R Wilson, Minh Nguyen Nhat To, Mahdi Gilany, Amoon Jamzad, Fahimeh Fooladgar, Brian Wodlinger, Purang Abolmaesumi, Parvin Mousavi","doi":"10.1007/s11548-025-03335-y","DOIUrl":"https://doi.org/10.1007/s11548-025-03335-y","url":null,"abstract":"<p><strong>Purpose: </strong>While deep learning methods have shown great promise in improving the effectiveness of prostate cancer (PCa) diagnosis by detecting suspicious lesions from trans-rectal ultrasound (TRUS), they must overcome multiple simultaneous challenges. There is high heterogeneity in tissue appearance, significant class imbalance in favor of benign examples, and scarcity in the number and quality of ground truth annotations available to train models. Failure to address even a single one of these problems can result in unacceptable clinical outcomes.</p><p><strong>Methods: </strong>We propose TRUSWorthy, a carefully designed, tuned, and integrated system for reliable PCa detection. Our pipeline integrates self-supervised learning, multiple-instance learning aggregation using transformers, random-undersampled boosting and ensembling: These address label scarcity, weak labels, class imbalance, and overconfidence, respectively. We train and rigorously evaluate our method using a large, multi-center dataset of micro-ultrasound data.</p><p><strong>Results: </strong>Our method outperforms previous state-of-the-art deep learning methods in terms of accuracy and uncertainty calibration, with AUROC and balanced accuracy scores of 79.9% and 71.5%, respectively. On the top 20% of predictions with the highest confidence, we can achieve a balanced accuracy of up to 91%.</p><p><strong>Conclusion: </strong>The success of TRUSWorthy demonstrates the potential of integrated deep learning solutions to meet clinical needs in a highly challenging deployment setting, and is a significant step toward creating a trustworthy system for computer-assisted PCa diagnosis.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143460333","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
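Two of the reported ingredients are easy to illustrate in isolation: ensembling by averaging member probabilities, and evaluating balanced accuracy on only the most confident fraction of predictions. A synthetic-data sketch follows; the confidence measure and threshold are assumptions, and this is not the TRUSWorthy pipeline itself.

```python
# Sketch: ensemble averaging + balanced accuracy on the most confident predictions.
import numpy as np

def ensemble_confident_subset(member_probs, y_true, keep_frac=0.20):
    """member_probs: (n_members, n_samples) cancer probabilities in [0, 1]."""
    p = member_probs.mean(axis=0)            # ensemble average
    conf = np.abs(p - 0.5)                   # distance from the decision boundary
    keep = conf >= np.quantile(conf, 1 - keep_frac)   # top ~20% most confident
    y_hat = (p[keep] >= 0.5).astype(int)
    y = y_true[keep]
    # Balanced accuracy: mean of per-class recalls over the classes present.
    recalls = [(y_hat[y == c] == c).mean() for c in (0, 1) if np.any(y == c)]
    return float(np.mean(recalls))

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)                          # synthetic labels
probs = np.clip(0.5 + (y - 0.5) * rng.uniform(0, 1, (5, 200)), 0, 1)
print(ensemble_confident_subset(probs, y))
```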
Mathematical methods for assessing the accuracy of pre-planned and guided surgical osteotomies.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-02-19 | DOI: 10.1007/s11548-025-03324-1
George R Nahass, Nicolas Kaplan, Isabel Scharf, Devansh Saini, Naji Bou Zeid, Sobhi Kazmouz, Linping Zhao, Lee W T Alkureishi
{"title":"Mathematical methods for assessing the accuracy of pre-planned and guided surgical osteotomies.","authors":"George R Nahass, Nicolas Kaplan, Isabel Scharf, Devansh Saini, Naji Bou Zeid, Sobhi Kazmouz, Linping Zhao, Lee W T Alkureishi","doi":"10.1007/s11548-025-03324-1","DOIUrl":"https://doi.org/10.1007/s11548-025-03324-1","url":null,"abstract":"<p><strong>Purpose: </strong>The fibula-free flap (FFF) is a valuable reconstructive technique in maxillofacial surgery; however, the assessment of osteotomy accuracy remains challenging. We devised two novel methodologies to compare planned and postoperative osteotomies in FFF reconstructions that minimized user input but would still generalize to other operations involving the analysis of osteotomies.</p><p><strong>Methods: </strong>Our approaches leverage basic mathematics to derive both quantitative and qualitative insights about the relationship of the postoperative osteotomy to the planned model. We have coined our methods 'analysis by a shared reference angle' and 'Euler angle analysis.'</p><p><strong>Results: </strong>In addition to describing our algorithm and the clinical utility, we present a thorough validation of both methods. Our algorithm is highly repeatable in an intraobserver repeatability test and provides information about the overall accuracy as well as geometric specifics of the deviation from the planned reconstruction.</p><p><strong>Conclusion: </strong>Our algorithm is a novel and robust method for assessing the osteotomy accuracy of FFF reconstructions. This approach has no reliance on the overall position of the reconstruction, which is valuable due to the multiple factors that may influence the outcome of FFF reconstructions. Additionally, while our approach relies on anatomical features for landmark selections, the flexibility in our approach makes it applicable to evaluate any operation involving osteotomies.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143450396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
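The abstract does not define "Euler angle analysis" precisely; one plausible minimal reading, sketched below with SciPy, is to take the minimal rotation aligning the planned osteotomy plane normal to the achieved one and report it as Euler angles. The axis convention and example normals are assumptions.

```python
# Sketch: express the deviation between planned and achieved cut planes as Euler angles.
import numpy as np
from scipy.spatial.transform import Rotation

def plane_deviation_euler(n_planned, n_postop):
    """Euler angles (deg) of the minimal rotation mapping the planned plane
    normal onto the postoperative one."""
    a = n_planned / np.linalg.norm(n_planned)
    b = n_postop / np.linalg.norm(n_postop)
    axis = np.cross(a, b)
    angle = np.arctan2(np.linalg.norm(axis), np.dot(a, b))
    if np.linalg.norm(axis) < 1e-12:          # normals already (anti)parallel
        return np.zeros(3)
    rotvec = axis / np.linalg.norm(axis) * angle
    return Rotation.from_rotvec(rotvec).as_euler("XYZ", degrees=True)

print(plane_deviation_euler(np.array([0.0, 0.0, 1.0]),
                            np.array([0.1, 0.0, 0.99])))
```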
Automatic future remnant segmentation in liver resection planning.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-02-17 | DOI: 10.1007/s11548-025-03331-2
Hicham Messaoudi, Marwan Abbas, Bogdan Badic, Douraied Ben Salem, Ahror Belaid, Pierre-Henri Conze
{"title":"Automatic future remnant segmentation in liver resection planning.","authors":"Hicham Messaoudi, Marwan Abbas, Bogdan Badic, Douraied Ben Salem, Ahror Belaid, Pierre-Henri Conze","doi":"10.1007/s11548-025-03331-2","DOIUrl":"https://doi.org/10.1007/s11548-025-03331-2","url":null,"abstract":"<p><strong>Purpose: </strong>Liver resection is a complex procedure requiring precise removal of tumors while preserving viable tissue. This study proposes a novel approach for automated liver resection planning, using segmentations of the liver, vessels, and tumors from CT scans to predict the future liver remnant (FLR), aiming to improve pre-operative planning accuracy and patient outcomes.</p><p><strong>Methods: </strong>This study evaluates deep convolutional and Transformer-based networks under various computational setups. Using different combinations of anatomical and pathological delineation masks, we assess the contribution of each structure. The method is initially tested with ground-truth masks for feasibility and later validated with predicted masks from a deep learning model.</p><p><strong>Results: </strong>The experimental results highlight the crucial importance of incorporating anatomical and pathological masks for accurate FLR delineation. Among the tested configurations, the best performing model achieves an average Dice score of approximately 0.86, aligning closely with the inter-observer variability reported in the literature. Additionally, the model achieves an average symmetric surface distance of 0.95 mm, demonstrating its precision in capturing fine-grained structural details critical for pre-operative planning.</p><p><strong>Conclusion: </strong>This study highlights the potential for fully-automated FLR segmentation pipelines in liver pre-operative planning. Our approach holds promise for developing a solution to reduce the time and variability associated with manual delineation. Such method can provide better decision-making in liver resection planning by providing accurate and consistent segmentation results. Future studies should explore its seamless integration into clinical workflows.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143442785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
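A minimal sketch of the two metrics reported, the Dice score and (assuming precomputed surface point sets) the average symmetric surface distance, evaluated on synthetic inputs:

```python
# Sketch: Dice overlap on binary masks and ASSD on surface point sets.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice overlap of two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def assd(pts_a: np.ndarray, pts_b: np.ndarray) -> float:
    """Average symmetric surface distance between two (N, 3) surface point sets."""
    d = np.linalg.norm(pts_a[:, None] - pts_b[None], axis=-1)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = np.zeros((32, 32), bool); a[8:24, 8:24] = True
b = np.zeros((32, 32), bool); b[10:26, 8:24] = True
print(dice(a, b))                                   # 0.875
print(assd(np.array([[0., 0., 0.], [1., 0., 0.]]),
           np.array([[0., 0., 1.], [1., 0., 1.]])))  # 1.0 (surfaces 1 mm apart)
```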
Breaking barriers: noninvasive AI model for BRAF V600E mutation identification.
IF 2.3 | CAS Q3 | Medicine
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-02-15 | DOI: 10.1007/s11548-024-03290-0
Fan Wu, Xiangfeng Lin, Yuying Chen, Mengqian Ge, Ting Pan, Jingjing Shi, Linlin Mao, Gang Pan, You Peng, Li Zhou, Haitao Zheng, Dingcun Luo, Yu Zhang
{"title":"Breaking barriers: noninvasive AI model for BRAF<sup>V600E</sup> mutation identification.","authors":"Fan Wu, Xiangfeng Lin, Yuying Chen, Mengqian Ge, Ting Pan, Jingjing Shi, Linlin Mao, Gang Pan, You Peng, Li Zhou, Haitao Zheng, Dingcun Luo, Yu Zhang","doi":"10.1007/s11548-024-03290-0","DOIUrl":"https://doi.org/10.1007/s11548-024-03290-0","url":null,"abstract":"<p><strong>Objective: </strong>BRAF<sup>V600E</sup> is the most common mutation found in thyroid cancer and is particularly associated with papillary thyroid carcinoma (PTC). Currently, genetic mutation detection relies on invasive procedures. This study aimed to extract radiomic features and utilize deep transfer learning (DTL) from ultrasound images to develop a noninvasive artificial intelligence model for identifying BRAF<sup>V600E</sup> mutations.</p><p><strong>Materials and methods: </strong>Regions of interest (ROI) were manually annotated in the ultrasound images, and radiomic and DTL features were extracted. These were used in a joint DTL-radiomics (DTLR) model. Fourteen DTL models were employed, and feature selection was performed using the LASSO regression. Eight machine learning methods were used to construct predictive models. Model performance was primarily evaluated using area under the curve (AUC), accuracy, sensitivity and specificity. The interpretability of the model was visualized using gradient-weighted class activation maps (Grad-CAM).</p><p><strong>Results: </strong>Sole reliance on radiomics for identification of BRAF<sup>V600E</sup> mutations had limited capability, but the optimal DTLR model, combined with ResNet152, effectively identified BRAF<sup>V600E</sup> mutations. In the validation set, the AUC, accuracy, sensitivity and specificity were 0.833, 80.6%, 76.2% and 81.7%, respectively. The AUC of the DTLR model was higher than that of the DTL and radiomics models. Visualization using the ResNet152-based DTLR model revealed its ability to capture and learn ultrasound image features related to BRAF<sup>V600E</sup> mutations.</p><p><strong>Conclusion: </strong>The ResNet152-based DTLR model demonstrated significant value in identifying BRAF<sup>V600E</sup> mutations in patients with PTC using ultrasound images. Grad-CAM has the potential to objectively stratify BRAF mutations visually. The findings of this study require further collaboration among more centers and the inclusion of additional data for validation.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143426641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
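A minimal scikit-learn sketch (synthetic features, not the paper's pipeline) of the DTLR recipe as described: concatenate radiomic and DTL feature vectors, select features with an L1 (LASSO-style) penalty, and fit one of the candidate classifiers. Feature extraction itself, the specific DTL backbone outputs, and all dimensions are assumptions.

```python
# Sketch: fused radiomic + DTL features -> L1 feature selection -> classifier.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
radiomic = rng.normal(size=(120, 50))    # hypothetical radiomic features per ROI
dtl      = rng.normal(size=(120, 128))   # hypothetical ResNet152 features per ROI
X = np.hstack([radiomic, dtl])           # joint DTLR feature vector
y = rng.integers(0, 2, size=120)         # BRAF V600E mutation label (synthetic)

clf = make_pipeline(
    StandardScaler(),
    # L1-penalized logistic regression as a LASSO-style feature selector.
    SelectFromModel(LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
    SVC(probability=True),               # one of several candidate classifiers
)
clf.fit(X, y)
print(clf.predict_proba(X[:3]))
```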