International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Improved muscle and fat segmentation for body composition measures on quantitative CT.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-07-01 DOI: 10.1007/s11548-025-03466-2
Jianfei Liu, Praveen Thoppey Srinivasan Balamuralikrishna, Sovira Tan, Pritam Mukherjee, Tejas Sudharshan Mathai, Perry J Pickhardt, Ronald M Summers
Purpose: Body composition analysis on abdominal CT scans is useful for opportunistic screening and offers prognostic insights into mortality and cardiovascular risk. However, current segmentation methods for muscle and fat often fail on the quantitative CT scans used for bone densitometry, which are commonly acquired to diagnose and monitor osteoporosis. This study aims to develop an accurate segmentation method for such scans and compare its performance with existing methods.

Methods: We applied an nnU-Net framework to segment muscle, subcutaneous fat, visceral fat, and an added "body" class for other non-background voxels. Training data included CT scans with bone densitometry phantoms, with segmentation annotations generated using our previous segmentation method followed by manual refinement. The proposed method was evaluated on 980 CT scans across two internal and two external datasets, including 30 CT scans with phantoms (15 internal, 15 external). Comparison was made with TotalSegmentator and our previous approach.

Results: The proposed method achieved the highest accuracy for muscle and subcutaneous fat segmentation across all four datasets (p < 0.05) and delivered comparable accuracy for visceral fat. Unlike TotalSegmentator and the previous method, it produced no false segmentations in the densitometry phantom included within the display field-of-view of the patient scan.

Conclusion: Experimental results showed that the proposed method improved segmentation accuracy for muscle and subcutaneous fat while maintaining high accuracy for visceral fat. Notably, segmentation accuracy was also high on quantitative CT scans for bone densitometry. These findings highlight the potential of the method to advance body composition analysis in clinical practice.

Pages: 1889-1898 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476392/pdf/
Citations: 0
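The significance claim above (p < 0.05) rests on comparing per-scan overlap scores between methods. As an illustration only, not the paper's exact pipeline, Dice overlap between binary masks can be computed and two methods compared with a paired Wilcoxon signed-rank test; the per-scan scores below are made-up numbers:

```python
import numpy as np
from scipy.stats import wilcoxon

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Hypothetical per-scan Dice scores for two methods on the same scans.
proposed = np.array([0.92, 0.90, 0.94, 0.91, 0.93, 0.89, 0.95, 0.92])
baseline = np.array([0.88, 0.86, 0.91, 0.87, 0.90, 0.85, 0.92, 0.89])

stat, p = wilcoxon(proposed, baseline)  # paired, non-parametric test
print(f"Wilcoxon p = {p:.4f}")
```

Because every paired difference favours the first method here, the test reports a small p-value; with real data the per-scan scores would come from the evaluated datasets.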
GPU-accelerated deformation mapping in hybrid organ models for real-time simulation.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-07-07 DOI: 10.1007/s11548-025-03377-2
Rintaro Miyazaki, Yuichiro Hayashi, Masahiro Oda, Kensaku Mori
Purpose: Surgical simulation is expected to be an effective way for physicians and medical students to learn surgical skills. To achieve real-time deformation of soft tissues with high visual quality, multi-resolution and adaptive mesh refinement models have been introduced. However, these models require additional processing time to map the deformation results of the deformed lattice onto a polygon model. In this study, we propose a method to accelerate this mapping using vertex shaders on the GPU and investigate its performance.

Methods: A hierarchical octree cube structure is generated from a high-resolution organ polygon model, and the entire organ model is divided into pieces according to the cube structure. During simulation, the vertex coordinates of each organ model piece are obtained by trilinear interpolation of the enclosing cube's eight vertex coordinates. This computation is expressed in a shader program, so organ model vertices are processed in the rendering pipeline for acceleration.

Results: For a constant number of processing cubes, CPU-based processing time increased linearly with the total number of organ model vertices, while GPU-based time remained nearly constant. Conversely, for a constant number of model vertices, GPU-based time increased linearly with the number of surface cubes. These linear trends define the conditions under which the GPU-based implementation is faster within the same frame time.

Conclusion: We implemented octree-cube deformation mapping using vertex shaders and confirmed its performance. The experimental results showed that the GPU can accelerate the mapping process for high-resolution organ models with large numbers of vertices.

Pages: 1785-1793 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476423/pdf/
Citations: 0
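The per-vertex mapping described above, trilinear interpolation from a cube's eight corners, can be sketched on the CPU; the shader version evaluates the same weights for each vertex in parallel in the rendering pipeline. A minimal NumPy illustration, with the bit-pattern corner ordering as an assumption:

```python
import numpy as np

def trilinear(cube: np.ndarray, uvw: np.ndarray) -> np.ndarray:
    """Interpolate a position from a cube's 8 corner coordinates.

    cube: (8, 3) corner positions, corner i ordered by the bits of i
          as (x, y, z), i.e. i = 0b(bx by bz).
    uvw:  (3,) local coordinates inside the cube, each in [0, 1].
    """
    u, v, w = uvw
    out = np.zeros(3)
    for i in range(8):
        bx, by, bz = (i >> 2) & 1, (i >> 1) & 1, i & 1
        weight = ((bx * u + (1 - bx) * (1 - u))
                  * (by * v + (1 - by) * (1 - v))
                  * (bz * w + (1 - bz) * (1 - w)))
        out += weight * cube[i]
    return out

# Demo: for an undeformed unit cube, interpolation reproduces the query point.
corners = np.array([[(i >> 2) & 1, (i >> 1) & 1, i & 1] for i in range(8)], float)
print(trilinear(corners, np.array([0.25, 0.5, 0.75])))
```

When the lattice deforms, only the eight `cube` corner positions change; every embedded polygon vertex follows through the same weighted sum.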
Real-time 3D US-CT fusion-based semi-automatic puncture robot system: clinical evaluation.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-08-05 DOI: 10.1007/s11548-025-03489-9
Masayuki Nakayama, Bo Zhang, Ryoko Kuromatsu, Masahito Nakano, Yu Noda, Takumi Kawaguchi, Qiang Li, Yuji Maekawa, Masakatsu G Fujie, Shigeki Sugano
Purpose: Conventional systems supporting percutaneous radiofrequency ablation (PRFA) have had difficulty ensuring safe and accurate puncture, owing to limitations of the medical images used and to organ displacement caused by patients' respiration. To address this problem, this study proposes a semi-automatic puncture robot system that integrates real-time ultrasound (US) images with computed tomography (CT) images. This paper evaluates the system's usefulness through a pilot clinical experiment involving participants.

Methods: For the clinical experiment, an improved U-Net model was constructed using fivefold cross-validation. Following the workflow of the proposed system, the model was trained on US images acquired from patients with robotic arms. The average Dice coefficient over the entire validation dataset was 0.87, so the model was implemented in the robotic system and applied to the clinical experiment.

Results: A clinical experiment was conducted using the robotic system equipped with the developed AI model on five adult male and female participants. The centroid distances between the point clouds from each modality were evaluated in the 3D US-CT fusion process, with the blood vessel centerline taken to represent the overall structural position. The centroid distances showed a minimum of 0.38 mm, a maximum of 4.81 mm, and an average of 1.97 mm.

Conclusion: Although the five participants had different CP classifications and the derived US images exhibited individual variability, all centroid distances satisfied the 5.00-mm ablation margin considered in PRFA, suggesting the potential accuracy and utility of the robotic system for puncture navigation. The results also suggested the potential generalization performance of an AI model trained with data acquired according to the robotic system's workflow.

Pages: 1817-1827 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476430/pdf/
Citations: 0
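The evaluation metric above compares the centroids of the US- and CT-derived vessel-centerline point clouds. A minimal sketch of that computation, with synthetic clouds standing in for the real centerlines:

```python
import numpy as np

def centroid_distance(us_pts: np.ndarray, ct_pts: np.ndarray) -> float:
    """Euclidean distance (mm) between the centroids of two (N, 3) point clouds."""
    return float(np.linalg.norm(us_pts.mean(axis=0) - ct_pts.mean(axis=0)))

# Synthetic stand-in: the CT cloud is the US cloud shifted by a known offset.
rng = np.random.default_rng(0)
us = rng.normal(size=(100, 3))
ct = us + np.array([1.0, 2.0, 2.0])   # offset with norm exactly 3.0
print(centroid_distance(us, ct))       # 3.0
```

A centroid comparison is deliberately coarse: it summarizes overall structural alignment without penalizing local shape differences between the two modalities.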
Multi-task deep learning for automatic image segmentation and treatment response assessment in metastatic ovarian cancer.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-09-03 DOI: 10.1007/s11548-025-03484-0
Bevis Drury, Inês P Machado, Zeyu Gao, Thomas Buddenkotte, Golnar Mahani, Gabriel Funingana, Marika Reinius, Cathal McCague, Ramona Woitek, Anju Sahdev, Evis Sala, James D Brenton, Mireia Crispin-Ortuzar
Purpose: High-grade serous ovarian carcinoma (HGSOC) is characterised by significant spatial and temporal heterogeneity, often presenting at an advanced metastatic stage. One of the most common treatment approaches involves neoadjuvant chemotherapy (NACT) followed by surgery. However, the multi-scale complexity of HGSOC poses a major challenge in evaluating response to NACT.

Methods: We present a multi-task deep learning approach that performs simultaneous segmentation of pelvic/ovarian and omental lesions in contrast-enhanced computed tomography (CE-CT) scans, as well as treatment response assessment in metastatic ovarian cancer. The model combines multi-scale feature representations from two identical U-Net architectures, allowing an in-depth comparison of CE-CT scans acquired before and after treatment. The network was trained on 198 CE-CT images from 99 ovarian cancer patients to predict segmentation masks and evaluate treatment response.

Results: The model achieves an AUC of 0.78 (95% CI 0.70-0.91) in an independent cohort of 98 scans from 49 ovarian cancer patients at a different institution. In addition to the classification performance, the segmentation Dice scores are only slightly below the current state of the art for HGSOC segmentation.

Conclusion: This work is the first to demonstrate the feasibility of a multi-task deep learning approach for assessing chemotherapy-induced tumour changes across the main disease burden of patients with complex multi-site HGSOC, which could be used for treatment response evaluation and disease monitoring.

Pages: 1923-1929 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476419/pdf/
Citations: 0
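An AUC with a 95% confidence interval, as reported above, is commonly obtained by bootstrap resampling of the test cohort. A sketch under that assumption (the paper's exact CI procedure is not stated in the abstract), using a rank-based AUC on hypothetical labels and scores:

```python
import numpy as np

def auc(labels: np.ndarray, scores: np.ndarray) -> float:
    """Rank-based AUC: probability that a positive outranks a negative."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the AUC."""
    rng = np.random.default_rng(seed)
    stats, n = [], len(labels)
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if labels[idx].min() == labels[idx].max():
            continue  # resample must contain both classes
        stats.append(auc(labels[idx], scores[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(lo), float(hi)

# Hypothetical response labels (1 = responder) and model scores.
y = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
s = np.array([0.2, 0.3, 0.8, 0.9, 0.4, 0.7, 0.35, 0.6, 0.55, 0.5])
print(auc(y, s), bootstrap_ci(y, s, n_boot=500))
```

Note that an asymmetric interval such as 0.70-0.91 around 0.78 is typical of percentile bootstrap on a bounded metric.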
Anomaly detection using intraoperative iKnife data: a comparative analysis in breast cancer surgery.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-07-29 DOI: 10.1007/s11548-025-03476-0
Olivia Radcliffe, Laura Connolly, Amoon Jamzad, Martin Kaufmann, Shaila Merchant, Jay Engel, Ross Walker, Sonal Varma, Gabor Fichtinger, John Rudan, Parvin Mousavi
Purpose: Intraoperative margin assessment is crucial to ensure complete tumor removal and minimize the risk of cancer recurrence during breast-conserving surgery. The Intelligent Knife (iKnife), a mass spectrometry device that analyzes surgical smoke, shows promise for near-real-time margin evaluation. However, current AI models depend on labeled ex-vivo datasets, which are costly and time-consuming to produce. This research explores whether machine learning anomaly detection models can reduce reliance on labeled ex-vivo datasets by exploiting unlabeled intraoperative spectra.

Methods: iKnife spectra were collected intraoperatively from 15 breast cancer surgeries. Ex-vivo samples were recorded from the resected specimen by a pathologist: healthy samples from the margin and tumor samples from the cross-section. We trained four anomaly detection methods, namely Isolation Forest (iForest), One-Class Principal Component Analysis (OC-PCA), Generalized One-Class Discriminative Subspaces (GODS), and its kernelized extension (KGODS), under two strategies: (i) intraoperative data only and (ii) intraoperative data plus healthy ex-vivo data. Performance was evaluated via four-fold cross-validation on labeled ex-vivo samples, with an additional ensemble approach on a held-out set. We compared the models to benchmark supervised classifiers and explored intraoperative feasibility with a retrospective case.

Results: Using intraoperative data alone, the average balanced accuracies in four-fold cross-validation were 70% (iForest), 81% (OC-PCA), 77% (GODS), and 81% (KGODS). Adding healthy ex-vivo data improved performance across all models; however, OC-PCA remained competitive without ex-vivo labels. On the held-out set, OC-PCA trained only on intraoperative data achieved 81% balanced accuracy, 90% sensitivity, and 72% specificity. OC-PCA was selected for the intraoperative feasibility case and correctly detected the tumor breach with one false positive.

Conclusion: Anomaly detection models, particularly OC-PCA, can identify positive breast cancer margins without labeled ex-vivo data. Though slightly lower in performance than supervised classifiers, they offer a promising low-resource alternative for intraoperative label generation and semi-supervised training, which can facilitate clinical deployment.

Pages: 1953-1963
Citations: 0
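Of the four methods compared above, Isolation Forest is the most readily available off the shelf. A sketch of the one-class setup with scikit-learn, using synthetic Gaussian feature vectors as stand-ins for mass spectra (the real pipeline, features, and hyperparameters are not specified in the abstract):

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
# Stand-ins: "healthy" training spectra, then a labeled test set where the
# tumor class comes from a clearly shifted distribution.
healthy_train = rng.normal(0.0, 1.0, size=(200, 50))
healthy_test = rng.normal(0.0, 1.0, size=(40, 50))
tumor_test = rng.normal(3.0, 1.0, size=(40, 50))

# Fit on unlabeled "normal" data only, as in one-class anomaly detection.
clf = IsolationForest(random_state=0).fit(healthy_train)

X = np.vstack([healthy_test, tumor_test])
y_true = np.array([0] * 40 + [1] * 40)        # 1 = anomaly (tumor)
y_pred = (clf.predict(X) == -1).astype(int)   # sklearn returns -1 for outliers

print("balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
```

Balanced accuracy is the natural report here because the tumor class is rare in practice; plain accuracy would reward always predicting "healthy".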
Streamlining the annotation process by radiologists of volumetric medical images with few-shot learning.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-06-25 DOI: 10.1007/s11548-025-03457-3
Alina Ryabtsev, Richard Lederman, Jacob Sosna, Leo Joskowicz
Purpose: Radiologists' manual annotations limit robust deep learning in volumetric medical imaging. While supervised methods excel with large annotated datasets, few-shot learning performs well for large structures but struggles with small ones, such as lesions. This paper describes a novel method that leverages the advantages of both few-shot learning models and fully supervised models while reducing the cost of manual annotation.

Methods: Our method takes as input a small dataset of labeled scans and a large dataset of unlabeled scans, and outputs a validated labeled dataset used to train a supervised model (nnU-Net). The estimated correction effort is reduced by having the radiologist correct a subset of the scan labels computed by a few-shot learning model (UniverSeg). The method uses an optimized support set of scan slice patches and prioritizes the resulting labeled scans that require the least correction. This process is repeated for the remaining unannotated scans until satisfactory performance is obtained.

Results: We validated our method on liver, lung, and brain lesions in CT and MRI scans (375 scans, 5933 lesions). Relative to manual annotation from scratch, it significantly reduces the estimated lesion detection correction effort: by 34% for missed lesions and 387% for wrongly identified lesions, with 130% fewer lesion contour corrections and 424% fewer pixels to correct in the lesion contours.

Conclusion: Our method effectively reduces the radiologist's annotation effort for small structures, producing high-quality annotated datasets sufficient to train deep learning models. The method is generic and can be applied to a variety of lesions in various organs imaged by different modalities.

Pages: 1863-1873 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476431/pdf/
Citations: 0
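The loop described above hinges on ranking auto-labeled scans by how much correction they will need. The abstract does not give the effort estimator, so the proxy below (fraction of uncertain voxels in the few-shot model's probability map) is purely hypothetical; it only illustrates the prioritization step:

```python
import numpy as np

def estimate_effort(prob_map: np.ndarray, low: float = 0.2, high: float = 0.8) -> float:
    """Hypothetical correction-effort proxy: the fraction of voxels whose
    predicted lesion probability is uncertain (far from both 0 and 1)."""
    return float(np.mean((prob_map > low) & (prob_map < high)))

def select_for_review(prob_maps: dict, budget: int) -> list:
    """Pick the scans whose few-shot predictions look cheapest to correct."""
    ranked = sorted(prob_maps, key=lambda scan_id: estimate_effort(prob_maps[scan_id]))
    return ranked[:budget]

maps = {
    "scan_a": np.full((8, 8), 0.95),  # confident prediction everywhere
    "scan_b": np.full((8, 8), 0.50),  # maximally uncertain everywhere
}
print(select_for_review(maps, budget=1))  # ['scan_a']
```

The corrected scans would then join the labeled pool, the support set would be re-optimized, and the loop would repeat on the remaining unlabeled scans.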
Determination of Kennedy's classification in panoramic X-rays by automated tooth labeling.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-06-24 DOI: 10.1007/s11548-025-03469-z
Hans Meine, Marc Christian Metzger, Patrick Weingart, Jonas Wüster, Rainer Schmelzeisen, Anna Rörich, Joachim Georgii, Leonard Simon Brandenburg
Purpose: Panoramic X-rays (PX) are extensively utilized in dental and maxillofacial diagnostics, offering comprehensive imaging of teeth and surrounding structures. This study investigates the automatic determination of Kennedy's classification in partially edentulous jaws.

Methods: A retrospective study involving 209 PX images from 206 patients was conducted. Mask R-CNN, an established deep learning instance segmentation model, was trained for the automatic detection, position labeling (according to the international dental federation's (FDI) scheme), and segmentation of teeth in PX. Subsequent post-processing steps filter duplicate outputs by position label and by geometric overlap. Finally, a rule-based determination of Kennedy's class of partially edentulous jaws was performed.

Results: In fivefold cross-validation, Kennedy's classification was correctly determined in 83.0% of cases, with the most common errors arising from the mislabeling of morphologically similar teeth. The underlying algorithm demonstrated high sensitivity (97.1%) and precision (98.1%) in tooth detection, with an F1 score of 97.6%. FDI position label accuracy was 94.7%. Ablation studies indicated that post-processing steps, such as duplicate filtering, significantly improved performance.

Conclusion: Our findings show that automatic dentition analysis in PX images can be extended to include clinically relevant jaw classification, reducing the workload associated with manual labeling and classification.

Pages: 1835-1843 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476436/pdf/
Citations: 0
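The abstract does not spell out the rule base, but once teeth carry FDI position labels, Kennedy's class follows from the pattern of edentulous areas. One plausible, deliberately simplified encoding for the upper jaw (not the paper's actual rules):

```python
# Upper-jaw FDI numbers ordered across the arch, right third molar to left.
ARCH = [18, 17, 16, 15, 14, 13, 12, 11, 21, 22, 23, 24, 25, 26, 27, 28]

def kennedy_class(present: set) -> int:
    """Simplified Kennedy class (1-4) for a partially edentulous maxilla.

    Hypothetical rules: free-end saddles (missing teeth distal to the last
    present tooth, third molars ignored) decide Classes I/II; otherwise a
    single gap crossing the midline is Class IV, any bounded gap Class III.
    Returns 0 when no relevant edentulous area is found.
    """
    flags = [t in present for t in ARCH]
    first = flags.index(True)                         # most distal right tooth
    last = len(flags) - 1 - flags[::-1].index(True)   # most distal left tooth
    right_free = first > 1    # 17 (and 18) missing on the right
    left_free = last < 14     # 27 (and 28) missing on the left
    if right_free and left_free:
        return 1
    if right_free or left_free:
        return 2
    # Collect bounded gaps between the first and last present teeth.
    gaps, i = [], first
    while i <= last:
        if not flags[i]:
            j = i
            while not flags[j]:
                j += 1
            gaps.append((i, j - 1))
            i = j
        else:
            i += 1
    # Midline lies between indices 7 (tooth 11) and 8 (tooth 21).
    if len(gaps) == 1 and gaps[0][0] <= 7 and gaps[0][1] >= 8:
        return 4
    return 3 if gaps else 0

full = set(ARCH)
print(kennedy_class(full - {17, 18, 27, 28}))  # bilateral free-end: Class 1
```

Real clinical rules also handle modification spaces and mixed cases, which is exactly where mislabeled, morphologically similar teeth would propagate into classification errors.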
Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-06-26 DOI: 10.1007/s11548-025-03434-w
Negar Chabi, Alfredo Illanes, Oliver Beuing, Daniel Behme, Bernhard Preim, Sylvia Saalfeld
Purpose: The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies such as aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses these calibration challenges and proposes leveraging interventional devices with radio-opaque markers to optimize the C-arm geometry.

Methods: We propose an online calibration method using image-specific features derived from interventional devices such as guidewires and catheters (in the remainder of this paper, "catheter" refers to both). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing that removes false positives by integrating vessel maps, manual correction, and identification markers. An interpolation step then fills gaps along the catheter.

Results: Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and a precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 mm to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance.

Conclusions: This study explores the use of interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces the 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.

Pages: 1875-1888 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476443/pdf/
Citations: 0
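The backprojection error that the calibration minimizes measures how far known 3D marker positions, projected through the current estimate of the C-arm geometry, land from their detected 2D image positions. A sketch of that error term alone (the full iterative nonlinear optimization over the geometry parameters is beyond this snippet):

```python
import numpy as np

def project(P: np.ndarray, X: np.ndarray) -> np.ndarray:
    """Project (N, 3) world points to 2D with a 3x4 projection matrix."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:3]   # perspective divide

def mean_backprojection_error(P, markers_3d, detections_2d) -> float:
    """Mean 2D distance between projected markers and their detections."""
    return float(np.linalg.norm(project(P, markers_3d) - detections_2d, axis=1).mean())

# Toy pinhole geometry (identity intrinsics, no rotation or translation).
P = np.hstack([np.eye(3), np.zeros((3, 1))])
markers = np.array([[0.0, 0.0, 2.0], [2.0, 2.0, 2.0]])
detected = np.array([[0.0, 0.0], [1.0, 1.0]])   # the exact projections
print(mean_backprojection_error(P, markers, detected))  # 0.0
```

In the calibration loop, an optimizer would perturb the geometry encoded in `P` for each view until this error, summed over markers and both biplane detectors, is minimal.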
Exploratory analysis and framework for tissue classification based on vibroacoustic signals from needle-tissue interaction.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-08-12 DOI: 10.1007/s11548-025-03491-1
Katarzyna Heryan, Witold Serwatka, Dominik Rzepka, Patricio Fuentealba, Michael Friebe
Purpose: Numerous medical procedures, such as pharmaceutical fluid injections and biopsies, require a surgical needle. During such procedures, localization of the needle is of prime importance, both to ensure that no vital organs are damaged and to confirm that the target location has been reached. Guidance to a target and its localization are done using imaging devices such as MRI machines, CT scanners, and US devices, all of which suffer from artifacts that make accurate localization, especially of the needle tip, difficult. This motivates a new needle guidance technique.

Methods: The movement of a needle through human tissue produces vibroacoustic signals that may be leveraged to retrieve information on the needle's location using signal processing and deep learning techniques. We constructed a specialized phantom with animal tissue submerged in gelatine to gather the data needed to test this hypothesis.

Results and conclusion: This paper summarizes our initial experiments, in which we preprocessed the data, converted it into two spectrogram representations (Mel and continuous wavelet transform spectrograms), and used them as input to two deep learning models: NeedleNet and ResNet-34. The goal of this work was to chart an optimal direction for further research.

Pages: 1795-1806 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476323/pdf/
Citations: 0
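One of the two representations above, the Mel spectrogram, is an STFT power spectrogram whose frequency bins are pooled by triangular filters spaced on the perceptual mel scale. A self-contained NumPy sketch (the paper presumably used a standard audio library; window, hop, and band counts here are illustrative):

```python
import numpy as np

def mel_spectrogram(signal: np.ndarray, sr: int,
                    n_fft: int = 256, hop: int = 128, n_mels: int = 20) -> np.ndarray:
    """Hann-windowed STFT power spectrogram mapped onto a mel filterbank."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2        # (frames, bins)

    # Triangular mel filters spanning 0 Hz to Nyquist.
    to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    from_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = from_mel(np.linspace(0.0, to_mel(sr / 2), n_mels + 2))
    bins = np.floor((n_fft + 1) * mel_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)   # rising edge
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)   # falling edge
    return power @ fb.T                                      # (frames, n_mels)

# Demo: one second of a 1 kHz tone sampled at 8 kHz.
t = np.arange(8000) / 8000.0
S = mel_spectrogram(np.sin(2 * np.pi * 1000.0 * t), sr=8000)
print(S.shape)
```

The resulting (time, mel-band) matrix is what gets fed as an image-like input to models such as the NeedleNet and ResNet-34 mentioned above.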
Landmark-free automatic digital twin registration in robot-assisted partial nephrectomy using a generic end-to-end model.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-09-01 Epub Date: 2025-07-17 DOI: 10.1007/s11548-025-03473-3
Kilian Chandelon, Alice Pitout, Mathieu Souchaud, Julie Desternes, Gaëlle Margue, Julien Peyras, Nicolas Bourdel, Jean-Christophe Bernhard, Adrien Bartoli
Purpose: Augmented Reality in minimally invasive surgery has made tremendous progress for organs including the liver and the uterus. The core problem of Augmented Reality is registration, where a preoperative patient's geometric digital twin must be aligned with the image of the surgical camera. The case of the kidney is as yet unresolved, owing to the absence of anatomical landmarks visible in both the patient's digital twin and the surgical images.

Methods: We propose a landmark-free approach to registration, which is particularly well-adapted to the kidney. The approach involves a generic kidney model and an end-to-end neural network, which we train with a proposed dataset to regress the registration directly from a surgical RGB image.

Results: Experimental evaluation across four clinical cases demonstrates strong concordance with expert-labelled registration, despite anatomical and motion variability. The proposed method achieved an average tumour contour alignment error of 7.3 ± 4.1 mm in 9.4 ± 0.2 ms.

Conclusion: This landmark-free registration approach meets the accuracy, speed, and resource constraints required in clinical practice, making it a promising tool for Augmented Reality-assisted partial nephrectomy.

Pages: 1931-1940
Citations: 0
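The tumour contour alignment error reported above compares a predicted contour against an expert-labelled one. The abstract does not define the distance; a common choice for such contour metrics is the symmetric mean closest-point distance, sketched here as an assumption:

```python
import numpy as np

def contour_alignment_error(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric mean closest-point distance between two (N, 2) contours."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)  # pairwise
    return float(0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean()))

# Two circular contours: the prediction is a shifted copy of ground truth.
theta = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
gt = np.stack([np.cos(theta), np.sin(theta)], axis=1)
pred = gt + np.array([0.2, 0.0])
print(contour_alignment_error(pred, gt))
```

Symmetrizing the distance matters: averaging in one direction only can hide cases where the predicted contour misses part of the ground-truth shape.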