Latest Articles: International Journal of Computer Assisted Radiology and Surgery

A multi-model deep learning approach for the identification of coronary artery calcifications within 2D coronary angiography images.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-08 | DOI: 10.1007/s11548-025-03382-5
Edoardo De Rose, Ciro Benito Raggio, Ahmad Riccardo Rasheed, Pierangela Bruno, Paolo Zaffino, Salvatore De Rosa, Francesco Calimeri, Maria Francesca Spadea
Purpose: Identifying and quantifying coronary artery calcification (CAC) is crucial for preoperative planning, as it helps to estimate both the complexity of the 2D coronary angiography (2DCA) procedure and the risk of intraoperative complications. Despite its relevance, current practice relies on visual inspection of the 2DCA image frames by clinicians. This procedure is prone to inaccuracies because of the poor contrast and small size of the CAC, and it depends on the physician's experience. To address this issue, we developed a workflow to assist clinicians in identifying CAC within 2DCA, using data from 44 image acquisitions across 14 patients.
Methods: Our workflow consists of three stages. In the first stage, a classification backbone based on ResNet-18 guides the CAC identification by extracting relevant features from 2DCA frames. In the second stage, a U-Net decoder, mirroring the encoding structure of the ResNet-18, identifies the regions of interest (ROI) of the CAC. Finally, a post-processing step refines the results to obtain the final ROI. The workflow was evaluated using leave-out cross-validation.
Results: The proposed method outperformed the comparative methods, achieving an F1-score of 0.87 (0.77-0.94; median and quartiles) for the classification step and an intersection over minimum (IoM) of 0.64 (0.46-0.86; median and quartiles) for the CAC identification step.
Conclusion: This is the first attempt to propose a clinical decision support system for identifying CAC within 2DCA. The proposed workflow holds the potential to improve both the accuracy and efficiency of CAC quantification, with promising clinical applications. As future work, the concurrent use of multiple auxiliary tasks could be explored to further improve segmentation performance.
Citations: 0
An augmented reality overlay for navigated prostatectomy using fiducial-free 2D-3D registration.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-08 | DOI: 10.1007/s11548-025-03374-5
Johannes Bender, Jeremy Kwe, Benedikt Hoeh, Katharina Boehm, Ivan Platzek, Angelika Borkowetz, Stefanie Speidel, Micha Pfeiffer
Purpose: Markerless navigation in minimally invasive surgery remains an unsolved challenge. Many proposed navigation systems for minimally invasive surgeries rely on stereoscopic images, whereas in clinical practice monocular endoscopes are often used. Given the additional lack of automatic video-based navigation systems for prostatectomies, this paper explores methods that tackle both research gaps at the same time for robot-assisted prostatectomies.
Methods: To realize a semi-automatic augmented reality overlay for navigated prostatectomy, the camera pose with respect to the prostate must be estimated. We developed a method in which visual cues are drawn on top of the organ after an initial manual alignment, simultaneously creating matching landmarks on the 2D and 3D data. Starting from this key frame, the cues are tracked in the endoscopic video. Both PnPRansac and differentiable rendering are then explored to perform 2D-3D registration for each frame.
Results: We performed experiments on synthetic and in vivo data. On synthetic data, differentiable rendering achieves a median target registration error of 6.11 mm. Both PnPRansac and differentiable rendering are feasible methods for 2D-3D registration.
Conclusion: We demonstrated a video-based markerless augmented reality overlay for navigated prostatectomy, using visual cues as an anchor.
Citations: 0
Reflecting topology consistency and abnormality via learnable attentions for airway labeling.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-06 | DOI: 10.1007/s11548-025-03368-3
Chenyu Li, Minghui Zhang, Chuyan Zhang, Yun Gu
Purpose: Accurate airway anatomical labeling is crucial for clinicians to identify and navigate complex bronchial structures during bronchoscopy. Automatic airway labeling is challenging due to significant anatomical variations, and previous methods are prone to generating inconsistent predictions, hindering preoperative planning and intraoperative navigation. This paper aims to enhance topological consistency and improve the detection of abnormal airway branches.
Methods: We propose a transformer-based framework incorporating two modules: soft subtree consistency (SSC) and abnormal branch saliency (ABS). The SSC module constructs a soft subtree to capture clinically relevant topological relationships, allowing flexible feature aggregation within and across subtrees. The ABS module facilitates interaction between node features and prototypes to distinguish abnormal branches, preventing erroneous feature aggregation between normal and abnormal nodes.
Results: Evaluated on a challenging dataset characterized by severe airway deformities, our method achieves superior performance compared to state-of-the-art approaches. Specifically, it attains 83.7% subsegmental accuracy, a 3.1% increase in segmental subtree consistency and a 45.2% increase in abnormal branch recall. Notably, the method demonstrates robust performance in cases with airway deformities, ensuring consistent and accurate labeling.
Conclusion: The enhanced topological consistency and robust identification of abnormal branches provided by our method offer an accurate and robust solution for airway labeling, with the potential to improve the precision and safety of bronchoscopy procedures.
Citations: 0
Towards automatic quantification of operating table interaction in operating rooms.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-04 | DOI: 10.1007/s11548-025-03363-8
Rick M Butler, Anne M Schouten, Anne C van der Eijk, Maarten van der Elst, Benno H W Hendriks, John J van den Dobbelsteen
Purpose: Perioperative staff shortages are a problem in hospitals worldwide, and keeping staff content and motivated is a challenge in today's busy hospital setting. New operating room technologies aim to increase safety and efficiency, causing a shift from interaction with patients to interaction with technology. Objectively measuring this shift could aid the design of supportive technological products or optimal planning for high-tech procedures.
Methods: Thirty-five gynaecological procedures at three technology levels were recorded: open surgery (OS), minimally invasive surgery (MIS) and robot-assisted surgery (RAS). We annotated interaction between staff and the patient. An algorithm is proposed that detects interaction with the operating table from staff posture and movement, expressing interaction as a percentage of total working time.
Results: The proposed algorithm measures operating table interaction of 70.4%, 70.3% and 30.1% during OS, MIS and RAS, respectively. Annotations yield patient interaction percentages of 37.6%, 38.3% and 24.6%. Algorithm measurements over time show operating table and patient interaction peaks at anomalous events or workflow phase transitions.
Conclusions: The annotations show less operating table and patient interaction during RAS than during OS and MIS. Annotated patient interaction and measured operating table interaction show similar differences between procedures and workflow phases. The visual complexity of operating rooms complicates pose tracking, deteriorating the algorithm's input quality. The proposed algorithm shows promise as a component in context-aware event or workflow phase detection.
Citations: 0
Model-based deep learning with fully connected neural networks for accelerated magnetic resonance parameter mapping.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-03 | DOI: 10.1007/s11548-025-03356-7
Naoto Fujita, Suguru Yokosawa, Toru Shirai, Yasuhiko Terada
Purpose: Quantitative magnetic resonance imaging (qMRI) enables imaging of physical parameters related to the nuclear spin of protons in tissue and is poised to revolutionize clinical research. However, improving the accuracy and clinical relevance of qMRI is essential for its practical implementation. This requires significantly reducing the currently lengthy acquisition times to enable clinical examinations and to provide an environment in which clinical accuracy and reliability can be verified. Deep learning (DL) has shown promise in significantly reducing imaging time and improving image quality in recent years. This study introduces a novel approach, the quantitative deep cascade of convolutional networks (qDC-CNN), as a framework for accelerated quantitative parameter mapping, and aims to verify that the proposed model outperforms competing methods.
Methods: The proposed qDC-CNN is an integrated deep learning framework combining an unrolled image reconstruction network with a fully connected neural network for parameter estimation. Training and testing used simulated multi-slice multi-echo (MSME) datasets generated from the BrainWeb database. The reconstruction error against the ground truth was evaluated using the normalized root mean squared error (NRMSE) and compared with conventional DL-based methods. Two validation experiments were performed: (1) assessment of acceleration factor (AF) dependency (AF = 5, 10, 20) with 16 echoes fixed, and (2) evaluation of the impact of reducing the number of contrast images (16, 8, 4 images).
Results: In most cases, the NRMSE values of S0 and T2 estimated by the proposed qDC-CNN were within 10%. In particular, the NRMSE values of T2 were much smaller than those of the conventional methods.
Conclusions: The proposed model had significantly smaller reconstruction errors than the conventional models. The proposed method can be applied to other qMRI sequences and has the flexibility to replace the image reconstruction module to improve performance.
Citations: 0
Automatic ultrasound image alignment for diagnosis of pediatric distal forearm fractures.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-02 | DOI: 10.1007/s11548-025-03361-w
Peng Liu, Yujia Hu, Jurek Schultz, Jinjing Xu, Christoph von Schrottenberg, Philipp Schwerk, Josephine Pohl, Guido Fitze, Stefanie Speidel, Micha Pfeiffer
Purpose: This study aims to develop an automatic method to align ultrasound images of the distal forearm for diagnosing pediatric fractures. The approach seeks to bypass the reliance on X-rays for fracture diagnosis, minimizing radiation exposure, making the process less painful and creating a more child-friendly diagnostic pathway.
Methods: We present a fully automatic pipeline to align paired point-of-care ultrasound (POCUS) images. We first leverage a deep learning model to delineate bone boundaries, from which we obtain key anatomical landmarks. These landmarks then guide an optimization-based alignment process, for which we propose three optimization constraints: aligning specific points, ensuring parallel orientation of the bone segments, and matching the bone widths.
Results: The method demonstrated high alignment accuracy compared to reference X-rays in terms of boundary distances. A morphology experiment covering fracture classification and angulation measurement shows comparable performance between the merged ultrasound images and conventional X-rays, justifying the effectiveness of our method in these cases.
Conclusions: The study introduced an effective, fully automatic pipeline for aligning ultrasound images, showing potential to replace X-rays for diagnosing pediatric distal forearm fractures. Initial tests show that surgeons find many of our results sufficient for diagnosis. Future work will focus on increasing the dataset size to improve diagnostic accuracy and reliability.
Citations: 0
A fast and robust geometric point cloud registration model for orthopedic surgery with noisy and incomplete data.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-02 | DOI: 10.1007/s11548-025-03387-0
Jiashi Zhao, Zihan Xu, Fei He, Jianhua Liu, Zhengang Jiang
Purpose: Accurate registration of partial-to-partial point clouds is crucial in computer-assisted orthopedic surgery but faces challenges due to incomplete data, noise and partial overlap. This paper proposes a novel geometric fast registration (GFR) model that addresses these issues through three core modules: point extractor registration (PER), dual attention transformer (DAT) and geometric feature matching (GFM).
Methods: PER operates in the frequency domain to enhance point cloud data by attenuating noise and reconstructing incomplete regions. DAT augments feature representation by correlating independent features from the source and target point clouds, improving model expressiveness. GFM identifies geometrically consistent point pairs, completing missing data and refining registration accuracy.
Results: We conducted experiments on a clinical bone dataset of 1432 distinct human skeletal samples, comprising ribs, scapulae and fibulae. The proposed model exhibited remarkable robustness and versatility, with consistent performance across diverse bone structures. When evaluated on noisy, partial-to-partial point clouds with incomplete bone data, the model achieved a mean squared error of 3.57 and a mean absolute error of 1.29 for rotation, and a mean squared error of 0.002 and a mean absolute error of 0.038 for translation.
Conclusion: Our proposed GFR model exhibits exceptional speed and universality, effectively handling point clouds with defects, noise and partial overlap. Extensive experiments on bone datasets demonstrate the superior performance of our model compared to state-of-the-art methods. The code is publicly available at https://github.com/xzh128/PER .
Citations: 0
Depth-based registration of 3D preoperative models to intraoperative patient anatomy using the HoloLens 2.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-01 (Epub 2025-03-14) | DOI: 10.1007/s11548-025-03328-x | Pages: 901-912
Enzo Kerkhof, Abdullah Thabit, Mohamed Benmahdjoub, Pierre Ambrosini, Tessa van Ginhoven, Eppo B Wolvius, Theo van Walsum
Purpose: In augmented reality (AR) surgical navigation, a registration step is required to align the preoperative data with the patient. This work investigates the use of the depth sensor of the HoloLens 2 for registration in surgical navigation.
Methods: An AR depth-based registration framework was developed. The framework aligns preoperative and intraoperative point clouds and overlays the preoperative model on the patient. Three evaluation experiments were conducted. First, the accuracy of the HoloLens's depth sensor was evaluated for both the Long-Throw (LT) and Articulated Hand Tracking (AHAT) modes. Second, the overall registration accuracy was assessed with different alignment approaches, evaluating the accuracy and success rate of each. Finally, a qualitative assessment of the framework was performed on various objects.
Results: The depth accuracy experiment showed mean overestimation errors of 5.7 mm for AHAT and 9.0 mm for LT. For the overall alignment, the mean translation errors of the different methods ranged from 12.5 to 17.0 mm, while rotation errors ranged from 0.9 to 1.1 degrees.
Conclusion: The results show that the depth sensor of the HoloLens 2 can be used for image-to-patient alignment with 1-2 cm accuracy within 4 s, indicating that, with further improvement in accuracy, this approach can offer a convenient alternative to other, time-consuming marker-based approaches. This work provides a generic markerless registration framework using the depth sensor of the HoloLens 2, with extensive analysis of the sensor's reconstruction and registration accuracy, supporting the advancement of markerless registration research in surgical navigation.
Citations: 0
A deep learning-driven method for safe and effective ERCP cannulation.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-01 (Epub 2025-02-07) | DOI: 10.1007/s11548-025-03329-w | Pages: 913-922
Yuying Liu, Xin Chen, Siyang Zuo
Purpose: In recent years, detection of the duodenal papilla and surgical cannula has become a critical task in computer-assisted endoscopic retrograde cholangiopancreatography (ERCP) cannulation. The complex surgical anatomy, coupled with the small size of the duodenal papillary orifice and its high similarity to the background, poses significant challenges to effective computer-assisted cannulation. To address these challenges, we present a deep learning-driven graphical user interface (GUI) to assist ERCP cannulation.
Methods: Considering the characteristics of the ERCP scenario, we propose a deep learning method for duodenal papilla and surgical cannula detection utilizing four swin transformer decoupled heads (4STDH). Four different prediction heads detect objects of different sizes, and an integrated swin transformer module identifies attention regions to exploit the prediction potential more deeply. Moreover, we decouple the classification and regression networks, significantly improving the model's accuracy and robustness through separate prediction. We also introduce a dataset on the papilla and cannula (DPAC), consisting of 1840 annotated endoscopic images, which will be publicly available. We integrated 4STDH and several state-of-the-art methods into the GUI and compared them.
Results: On the DPAC dataset, 4STDH outperforms state-of-the-art methods with an mAP of 93.2% and superior generalization performance. Additionally, the GUI provides real-time positions of the papilla and cannula, along with the planar distance and direction required for the cannula to reach the cannulation position.
Conclusion: We validated the GUI's performance on human gastrointestinal endoscopic videos, showing deep learning's potential to enhance the safety and efficiency of clinical ERCP cannulation.
Citations: 0
Breaking barriers: noninvasive AI model for BRAFV600E mutation identification.
IF 2.3 | CAS Zone 3 (Medicine)
International Journal of Computer Assisted Radiology and Surgery | Pub Date: 2025-05-01 (Epub 2025-02-15) | DOI: 10.1007/s11548-024-03290-0 | Pages: 935-947
Fan Wu, Xiangfeng Lin, Yuying Chen, Mengqian Ge, Ting Pan, Jingjing Shi, Linlin Mao, Gang Pan, You Peng, Li Zhou, Haitao Zheng, Dingcun Luo, Yu Zhang
Objective: BRAFV600E is the most common mutation found in thyroid cancer and is particularly associated with papillary thyroid carcinoma (PTC). Currently, genetic mutation detection relies on invasive procedures. This study aimed to extract radiomic features and utilize deep transfer learning (DTL) from ultrasound images to develop a noninvasive artificial intelligence model for identifying BRAFV600E mutations.
Materials and methods: Regions of interest (ROI) were manually annotated in the ultrasound images, and radiomic and DTL features were extracted. These were combined in a joint DTL-radiomics (DTLR) model. Fourteen DTL models were employed, and feature selection was performed using LASSO regression. Eight machine learning methods were used to construct predictive models. Model performance was primarily evaluated using the area under the curve (AUC), accuracy, sensitivity and specificity. The interpretability of the model was visualized using gradient-weighted class activation maps (Grad-CAM).
Results: Sole reliance on radiomics for the identification of BRAFV600E mutations had limited capability, but the optimal DTLR model, combined with ResNet152, identified the mutations effectively. In the validation set, the AUC, accuracy, sensitivity and specificity were 0.833, 80.6%, 76.2% and 81.7%, respectively. The AUC of the DTLR model was higher than those of the DTL and radiomics models. Visualization using the ResNet152-based DTLR model revealed its ability to capture and learn ultrasound image features related to BRAFV600E mutations.
Conclusion: The ResNet152-based DTLR model demonstrated significant value in identifying BRAFV600E mutations in patients with PTC from ultrasound images. Grad-CAM has the potential to objectively stratify BRAF mutations visually. These findings require further validation through collaboration among more centers and the inclusion of additional data.
Citations: 0