Interactive pipeline for mandible reconstruction surgery planning using fibula free flap.
Sylvain Leclerc, Bianca Jansen Van Rensburg, Thibault De Villèle, Marie De Boutray, Nabil Zemiti, Noura Faraj
International Journal of Computer Assisted Radiology and Surgery, published 2025-05-12. DOI: 10.1007/s11548-025-03392-3

Purpose: Mandible reconstruction surgery using a fibula free flap is a long and expensive process requiring extensive surgical experience. Indeed, the planning stage (mandible shaping and, consequently, osteotomy positioning on the fibula) is tedious, often done by hand, and can take months. This delay is unacceptable when mandible deterioration is caused by a time-sensitive disease such as cancer. In this paper, we propose an interactive pipeline for an easy-to-use and time-efficient surgical planning tool tailored to be used directly by the surgeon.

Methods: From CT scans of the patient's mandible and fibula, we propose to register a cutting structure to the mandible and to segment and mesh the fibula; then, respecting anatomical constraints (mandible curvature, flap size, vessel preservation, etc.), we generate a surgery plan. Next, in a 3D interactive environment, the surgeon can intuitively shape the mandible by cutting, moving, and modifying bone fragments nondestructively. This stage allows surgeons to express their expertise, and the resulting cutting-plane positions are then sent to a robot serving as a cutting guide for the surgery.

Results: We demonstrate the efficiency of our method through patient-specific surgery planning for two different pathologic cases. We show that our results are comparable to those of a commercial solution, apart from cutting-guide design.

Conclusion: Our proposed pipeline allows for precise patient-specific planning and cuts the preoperative planning phase of mandible reconstruction surgery down from days to minutes.
{"title":"Automatic trajectory planning for stereotactic radiofrequency ablation in non-discrete search space.","authors":"Adela Lukes, Reto Bale, Wolfgang Freysinger","doi":"10.1007/s11548-025-03386-1","DOIUrl":"https://doi.org/10.1007/s11548-025-03386-1","url":null,"abstract":"<p><strong>Purpose: </strong>Radiofrequency ablation is a well established minimally invasive procedure to treat tumors in solid organs. During the procedure applicators are inserted into the tumor and cells around their tips are destroyed by heat-induced denaturation. Manual trajectory planning requires a trained interventionalist, and its complexity and planning time rise significantly with an increasing number of trajectories.</p><p><strong>Methods: </strong>We propose a trajectory planning method using a genetic algorithm to accelerate the planning process by automatically generating multiple safe plans. Our method uses a non-discrete search space to find the best entry and target points and does not need any prior calculation of such candidate's points sets. The method offers multiple plans, allowing the interventionalists to choose the most appropriate one. We tested on an open-source and in-house dataset, comparing with related work and retrospectively with the in-house clinical planning.</p><p><strong>Results: </strong>Our method, tested on 154 liver tumors across all segments using a 10 mm ablation radius, achieves a mean coverage of over 99% of the tumors including a 5 mm safety margin. The method provides safe trajectories for all solutions and is on average 4 <math><mo>×</mo></math> faster than related approaches.</p><p><strong>Conclusion: </strong>To the best of our knowledge, we are the first to propose a fast and accurate planning technique using multiple applicators with 10 mm ablation radius. Our algorithm can deliver solutions optimizing more than ten trajectories, approaching the clinical practice at our institution, where large tumors are treated with multiple overlapping ablation zones rather than resection.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144007343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Accuracy of a novel, automated 2D-3D registration software for measuring cup position in total hip arthroplasty.
Jevan Arulampalam, Moritz F Ehlke, Christopher Plaskos, Qing Li, Catherine Z Stambouzou, James A Sullivan, Michael I Solomon, Eric M Slotkin
International Journal of Computer Assisted Radiology and Surgery, published 2025-05-11. DOI: 10.1007/s11548-025-03389-y

Purpose: This study evaluated the accuracy of an automated 2D-3D registration software for matching preoperative 3D models of the pelvis and acetabular component to intraoperative 2D fluoroscopy images in total hip arthroplasty (THA).

Methods: We developed a 2D-3D registration software that registers a 3D model of the pelvis from preoperative CT and a 3D model of the acetabular implant to intraoperative fluoroscopic imaging, thereby calculating the implant position relative to the 3D pelvic reference frame. A total of 145 datasets were used, including 65 digitally reconstructed radiographs, 20 dry-bone phantom datasets, and 60 clinical datasets with preoperative CT and intraoperative fluoroscopy imaging. Achieved acetabular positions in the clinical images were determined from postoperative CT using a 3D/3D registration method. Accuracy was assessed by comparing the acetabular position calculated by the 2D-3D software to the ground-truth data.

Results: The mean absolute difference between ground truth and the 2D-3D software was 1.9° [signed error range: -4.4, 4.8] for inclination, 1.5° [-7.3, 4.1] for anteversion, 1.6 mm [-5, 3.8] for cup height, and 1.8 mm [-7.3, 4.1] for depth across all datasets. In total, 100% of inclination results and 98% of anteversion results were within 5°, while 90% of height and 81% of depth results were within 3 mm.

Conclusion: We validated the accuracy of an automated 2D-3D registration software for use in THA. While our method requires preoperative CT data, the results are comparable to robotics and image-based navigation, and it presents a promising, simple technology that can be easily integrated into an operating room for THA.
{"title":"Enhancing segmentation accuracy of the common iliac vein in OLIF51 surgery in intraoperative endoscopic video through gamma correction: a deep learning approach.","authors":"Kaori Yamamoto, Reoto Ueda, Kazuhide Inage, Yawara Eguchi, Miyako Narita, Yasuhiro Shiga, Masahiro Inoue, Noriyasu Toshi, Soichiro Tokeshi, Kohei Okuyama, Shuhei Ohyama, Satoshi Maki, Takeo Furuya, Seiji Ohtori, Sumihisa Orita","doi":"10.1007/s11548-025-03388-z","DOIUrl":"https://doi.org/10.1007/s11548-025-03388-z","url":null,"abstract":"<p><strong>Purpose: </strong>The principal objective of this study was to develop and evaluate a deep learning model for segmenting the common iliac vein (CIV) from intraoperative endoscopic videos during oblique lateral interbody fusion for L5/S1 (OLIF51), a minimally invasive surgical procedure for degenerative lumbosacral spine diseases. The study aimed to address the challenge of intraoperative differentiation of the CIV from surrounding tissues to minimize the risk of vascular damage during the surgery.</p><p><strong>Methods: </strong>We employed two convolutional neural network (CNN) architectures: U-Net and U-Net++ with a ResNet18 backbone, for semantic segmentation. Gamma correction was applied during image preprocessing to improve luminance contrast between the CIV and adjacent tissues. We used a dataset of 614 endoscopic images from OLIF51 surgeries for model training, validation, and testing.</p><p><strong>Results: </strong>The U-Net++/ResNet18 model outperformed, achieving a Dice score of 0.70, indicating superior ability in delineating the position and shape of the CIV compared to the U-Net/ResNet18 model, which achieved a Dice score of 0.59. Gamma correction increased the differentiation between the CIV and the artery, improving the Dice score from 0.44 to 0.70.</p><p><strong>Conclusion: </strong>The findings demonstrate that deep learning models, especially the U-Net++ with ResNet18 enhanced by gamma correction preprocessing, can effectively segment the CIV in intraoperative videos. This approach has the potential to significantly improve intraoperative assistance and reduce the risk of vascular injury during OLIF51 procedures, despite the need for further research and refinement of the model for clinical application.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144042846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality system for endoscopic pituitary surgery with automatic registration.
Aure Enkaoua, João Ramalhinho, Mobarakol Islam, Hani J Marcus, Matthew J Clarkson
International Journal of Computer Assisted Radiology and Surgery, published 2025-05-11. DOI: 10.1007/s11548-025-03384-3

Purpose: Endoscopic pituitary surgery is a minimally invasive technique to remove pituitary tumours through the nose. Currently, image guidance may be used in the form of a tracked pointer to help surgeons navigate the region and avoid damage to critical structures. However, the pointer method is mentally demanding, as the pointer location is displayed in a different modality, and it disrupts the surgical workflow due to setup time and frequent tool removal.

Methods: We propose an augmented reality (AR) system in which information from the preoperative scan is displayed directly onto the endoscopic video. Our system features an on-board tracking system, allowing the registration process to be performed automatically.

Results: We evaluated the accuracy of our system and compared it to an AR system that uses an infrared (IR) camera to track an endoscope with reflective markers. Our system achieved an accuracy of 1.1 (±0.4) mm, compared to 2.4 (±0.9) mm for the IR-tracked endoscope approach.

Conclusion: Our augmented reality system is a more compact and transportable setup that outperformed the IR-tracked endoscope. The automatic registration method can save time in the operating room as well as increase AR overlay accuracy, improving the translation of these technologies.
Improving neck ultrasound image retrieval using intra-sweep representation learning.
Wanwen Chen, Adam Schmidt, Eitan Prisman, Septimiu E Salcudean
International Journal of Computer Assisted Radiology and Surgery, published 2025-05-10. DOI: 10.1007/s11548-025-03394-1

Purpose: Intraoperative ultrasound (US) can enhance real-time visualization in transoral robotic surgery (TORS) and improve the safety of the surgery. To develop a US guidance system for TORS, US probe localization and US-preoperative image registration are essential. Image retrieval has the potential to solve these two problems in the same framework, and learning a discriminative US representation is key to successful image retrieval.

Methods: We propose a self-supervised contrastive learning approach to match intraoperative US views to a preoperative image database. We introduce a novel contrastive learning strategy that leverages intra-sweep similarity and US probe location to improve feature encoding. Additionally, our model incorporates a flexible threshold to reject unsatisfactory matches.

Results: Our method achieves 92.30% retrieval accuracy on simulated data and outperforms state-of-the-art temporal-based contrastive learning approaches. We also test our approach on real patient data with preoperative US-CT registration to show the feasibility of the proposed US probe localization system, despite tissue deformation due to tongue retraction.

Conclusion: Our contrastive learning method, which utilizes intra-sweep similarity and US probe location, enhances US image representation learning. We also demonstrate the feasibility of using our image retrieval method to provide neck US localization on real patient data after tongue retraction.
{"title":"Improvements in dementia classification for brain SPECT volumes using vision transformer and the Brodmann areas.","authors":"Hirotaka Wakao, Tomomichi Iizuka, Akinobu Shimizu","doi":"10.1007/s11548-025-03365-6","DOIUrl":"https://doi.org/10.1007/s11548-025-03365-6","url":null,"abstract":"<p><strong>Purpose: </strong>This study proposes a vision transformer (ViT)-based model for dementia classification, able to classify representative dementia with Alzheimer's disease, dementia with Lewy bodies, frontotemporal dementia, and healthy controls using brain single-photon emission computed tomography (SPECT) images. The proposed method allows for an input based on the anatomical structure of the brain and the efficient use of five different SPECT images.</p><p><strong>Methods: </strong>The proposed model comprises a linear projection of input patches, eight transformer encoder layers, and a multilayered perceptron for classification with the following features: 1. diverse feature extraction with a multi-head structure for five different SPECT images; 2. Brodmann area-based input patch reflecting the anatomical structure of the brain; 3. cross-attention to fusion of diverse features.</p><p><strong>Results: </strong>The proposed method achieved a classification accuracy of 85.89% for 418 SPECT images from real clinical cases, significantly outperforming previous studies. Ablation studies were conducted to investigate the validity of each contribution, in which the consistency between the model's attention map and the physician's attention region was analyzed in detail.</p><p><strong>Conclusion: </strong>The proposed ViT-based model demonstrated superior dementia classification accuracy compared to previous methods, and is thus expected to contribute to early diagnosis and treatment of dementia using SPECT imaging. In the future, we aim to further improve the accuracy through the incorporation of patient clinical information.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144052589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shortcut learning leads to sex bias in deep learning models for photoacoustic tomography.
Marcel Knopp, Christoph J Bender, Niklas Holzwarth, Yi Li, Julius Kempf, Milenko Caranovic, Ferdinand Knieling, Werner Lang, Ulrich Rother, Alexander Seitel, Lena Maier-Hein, Kris K Dreher
International Journal of Computer Assisted Radiology and Surgery, published 2025-05-09. DOI: 10.1007/s11548-025-03370-9

Purpose: Shortcut learning has been identified as a source of algorithmic unfairness in medical imaging artificial intelligence (AI), but its impact on photoacoustic tomography (PAT), particularly concerning sex bias, remains underexplored. This study investigates the issue using peripheral artery disease (PAD) diagnosis as a specific clinical application.

Methods: To examine the potential for sex bias due to shortcut learning in convolutional neural networks (CNNs) and assess how such biases might affect diagnostic predictions, we created training and test datasets with varying PAD prevalence between sexes. Using these datasets, we explored (1) whether CNNs can classify sex from imaging data, (2) how sex-specific prevalence shifts impact PAD diagnosis performance and the underdiagnosis disparity between sexes, and (3) how similarly CNNs encode sex and PAD features.

Results: Our study of 147 individuals demonstrates that CNNs can classify sex from calf-muscle PAT images, achieving an AUROC of 0.75. For PAD diagnosis, models trained on data with imbalanced sex-specific disease prevalence experienced significant performance drops (up to 0.21 AUROC) when applied to balanced test sets. Additionally, greater imbalances in sex-specific prevalence within the training data exacerbated underdiagnosis disparities between sexes. Finally, we identify evidence of shortcut learning by demonstrating the effective reuse of learned feature representations between the PAD diagnosis and sex classification tasks.

Conclusion: CNN-based models trained on PAT data may engage in shortcut learning by leveraging sex-related features, leading to biased and unreliable diagnostic predictions. Addressing demographic-specific prevalence imbalances and preventing shortcut learning is critical for developing models in the medical field that are both accurate and equitable across diverse patient populations.
KEVS: enhancing segmentation of visceral adipose tissue in pre-cystectomy CT with Gaussian kernel density estimation.
Thomas Boucher, Nicholas Tetlow, Annie Fung, Amy Dewar, Pietro Arina, Sven Kerneis, John Whittle, Evangelos B Mazomenos
International Journal of Computer Assisted Radiology and Surgery, published 2025-05-09. DOI: 10.1007/s11548-025-03380-7

Purpose: The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of postoperative complications. Existing VAT segmentation methods for computed tomography (CT) that employ intensity thresholding suffer from inter-observer variability. Moreover, the difficulty of creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT that is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations.

Methods: We introduce the kernel density-enhanced VAT segmentator (KEVS), which combines a DL semantic segmentation model for multi-body-feature prediction with Gaussian kernel density estimation analysis of the predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks.

Results: We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare its VAT segmentation predictions to existing state-of-the-art (SOTA) approaches on a dataset of 20 pre-cystectomy CT scans with expert ground-truth annotations, collected from University College London Hospital (UCLH-Cyst). Evaluated on UCLH-Cyst, KEVS shows a 4.80% and 6.02% improvement in Dice coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively.

Conclusion: This research introduces KEVS, an automated SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets that do not contain ground-truth VAT masks.
A multi-model deep learning approach for the identification of coronary artery calcifications within 2D coronary angiography images.
Edoardo De Rose, Ciro Benito Raggio, Ahmad Riccardo Rasheed, Pierangela Bruno, Paolo Zaffino, Salvatore De Rosa, Francesco Calimeri, Maria Francesca Spadea
International Journal of Computer Assisted Radiology and Surgery, published 2025-05-08. DOI: 10.1007/s11548-025-03382-5

Purpose: Identifying and quantifying coronary artery calcification (CAC) is crucial for preoperative planning, as it helps to estimate both the complexity of the 2D coronary angiography (2DCA) procedure and the risk of intraoperative complications. Despite its relevance, current practice relies on visual inspection of the 2DCA image frames by clinicians. This procedure is prone to inaccuracies due to the poor contrast and small size of the CAC, and it depends on the physician's experience. To address this issue, we developed a workflow to assist clinicians in identifying CAC within 2DCA using data from 44 image acquisitions across 14 patients.

Methods: Our workflow consists of three stages. In the first stage, a classification backbone based on ResNet-18 is applied to guide CAC identification by extracting relevant features from 2DCA frames. In the second stage, a U-Net decoder architecture, mirroring the encoding structure of the ResNet-18, is employed to identify the regions of interest (ROIs) of the CAC. Finally, a post-processing step refines the results to obtain the final ROIs. The workflow was evaluated using a leave-out cross-validation.

Results: The proposed method outperformed the comparative methods, achieving an F1-score of 0.87 (0.77-0.94) (median and quartiles) for the classification step, while for the CAC identification step the intersection over minimum (IoM) was 0.64 (0.46-0.86) (median and quartiles).

Conclusion: This is the first attempt to propose a clinical decision support system to assist the identification of CAC within 2DCA. The proposed workflow holds the potential to improve both the accuracy and efficiency of CAC quantification, with promising clinical applications. As future work, the concurrent use of multiple auxiliary tasks could be explored to further improve segmentation performance.