{"title":"Stable distance regression via spatial-frequency state space model for robot-assisted endomicroscopy.","authors":"Mengyi Zhou, Chi Xu, Stamatia Giannarou","doi":"10.1007/s11548-025-03353-w","DOIUrl":"10.1007/s11548-025-03353-w","url":null,"abstract":"<p><strong>Purpose: </strong>Probe-based confocal laser endomicroscopy (pCLE) is a noninvasive technique that enables the direct visualization of tissue at a microscopic level in real time. One of the main challenges in using pCLE is maintaining the probe within a working range of micrometer scale. As a result, the need arises for automatically regressing the probe-tissue distance to enable precise robotic tissue scanning.</p><p><strong>Methods: </strong>In this paper, we propose the spatial frequency bidirectional structured state space model (SF-BiS4D) for pCLE probe-tissue distance regression. This model advances traditional state space models by processing image sequences bidirectionally and analyzing data in both the frequency and spatial domains. Additionally, we introduce a guided trajectory planning strategy that generates pseudo-distance labels, facilitating the training of sequential models to generate smooth and stable robotic scanning trajectories. To improve inference speed, we also implement a hierarchical guided fine-tuning (GF) approach that efficiently reduces the size of the BiS4D model while maintaining performance.</p><p><strong>Results: </strong>The performance of our proposed model has been evaluated both qualitatively and quantitatively using the pCLE regression dataset (PRD). 
In comparison with existing state-of-the-art (SOTA) methods, our approach demonstrated superior performance in terms of accuracy and stability.</p><p><strong>Conclusion: </strong>Our proposed deep learning-based framework effectively improves distance regression for microscopic visual servoing and demonstrates its potential for integration into surgical procedures requiring precise real-time intraoperative imaging.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1167-1174"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167353/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144063151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The RoDEM benchmark: evaluating the robustness of monocular single-shot depth estimation methods in minimally-invasive surgery.","authors":"Rasoul Sharifian, Navid Rabbani, Adrien Bartoli","doi":"10.1007/s11548-025-03375-4","DOIUrl":"10.1007/s11548-025-03375-4","url":null,"abstract":"<p><strong>Purpose: </strong>Monocular Single-shot Depth Estimation (MoSDE) methods for Minimally-Invasive Surgery (MIS) are promising, but their robustness in surgical conditions remains questionable. We introduce the RoDEM benchmark, comprising an advanced analysis of perturbations, a dataset acquired under realistic MIS conditions, and dedicated metrics. The dataset consists of 29,803 ex-vivo images, including 44 video sequences with depth ground truth, covering clean conditions and nine perturbations. We evaluate the performance of nine existing MoSDE methods.</p><p><strong>Methods: </strong>An RGB-D structured-light camera was firmly attached to a laparoscope. The two cameras were internally calibrated and the rigid transformation between them was estimated. Synchronised images and videos were captured while producing real perturbations in three settings. The depth maps were eventually transferred to the laparoscope viewpoint and the images categorised by perturbation severity.</p><p><strong>Results: </strong>The proposed metrics cover accuracy (clean condition performance) and robustness (resilience to perturbations). We found that foundation models demonstrated higher accuracy than the other methods. All methods were robust to motion blur and bright light. Methods trained on large datasets were robust against smoke, blood, and low light, whereas the other methods exhibited reduced robustness. None of the methods coped with lens dirtiness and defocus blur.</p><p><strong>Conclusion: </strong>This study highlighted the importance of robustness evaluation in MoSDE, as many existing methods showed reduced accuracy against common surgical perturbations. 
It emphasises the importance of training on large datasets that include perturbations. The proposed benchmark gives a precise and detailed analysis of a method's performance under MIS conditions. It will be made publicly available.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1215-1229"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144057485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating 3D pseudo-healthy knee MR images to support trochleoplasty planning.","authors":"Michael Wehrli, Alicia Durrer, Paul Friedrich, Volodimir Buchakchiyskiy, Marcus Mumme, Edwin Li, Gyozo Lehoczky, Carol C Hasler, Philippe C Cattin","doi":"10.1007/s11548-025-03343-y","DOIUrl":"10.1007/s11548-025-03343-y","url":null,"abstract":"<p><strong>Purpose: </strong>Trochlear dysplasia (TD) is a common malformation in adolescents, leading to anterior knee pain and instability. Surgical interventions such as trochleoplasty require precise planning to correct the trochlear groove. However, no standardized preoperative plan exists to guide surgeons in reshaping the femur. This study aims to generate patient-specific, pseudo-healthy MR images of the trochlear region that should theoretically align with the respective patient's patella, potentially supporting the preoperative planning of trochleoplasty.</p><p><strong>Methods: </strong>We employ a wavelet diffusion model (WDM) to generate personalized pseudo-healthy, anatomically plausible MR scans of the trochlear region. We train our model using knee MR scans of healthy subjects. During inference, we mask out pathological regions around the patella in scans of patients affected by TD and replace them with their pseudo-healthy counterpart. An orthopedic surgeon measured the sulcus angle (SA), trochlear groove depth (TGD) and Déjour classification in MR scans before and after inpainting. The code is available at https://github.com/wehrlimi/Generate-Pseudo-Healthy-Knee-MRI .</p><p><strong>Results: </strong>The inpainting by our model significantly improves the SA, TGD and Déjour classification in a study with 49 knee MR scans.</p><p><strong>Conclusion: </strong>This study demonstrates the potential of WDMs in providing surgeons with patient-specific guidance. 
By offering anatomically plausible MR scans, the method could potentially enhance the precision and preoperative planning of trochleoplasty and pave the way to more minimally invasive surgeries.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1059-1066"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167290/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143732835","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmarking reinforcement learning algorithms for autonomous mechanical thrombectomy.","authors":"Farhana Moosa, Harry Robertshaw, Lennart Karstensen, Thomas C Booth, Alejandro Granados","doi":"10.1007/s11548-025-03360-x","DOIUrl":"10.1007/s11548-025-03360-x","url":null,"abstract":"<p><strong>Purpose: </strong>Mechanical thrombectomy (MT) is the gold standard for treating acute ischemic stroke. However, challenges such as operator radiation exposure, reliance on operator experience, and limited treatment access remain. Although autonomous robotics could mitigate some of these limitations, current research lacks benchmarking of reinforcement learning (RL) algorithms for MT. This study aims to evaluate the performance of Deep Deterministic Policy Gradient, Twin Delayed Deep Deterministic Policy Gradient, Soft Actor-Critic, and Proximal Policy Optimization for MT.</p><p><strong>Methods: </strong>Simulated endovascular interventions based on the open-source stEVE platform were employed to train and evaluate RL algorithms. We simulated navigation of a guidewire from the descending aorta to the supra-aortic arteries, a key phase in MT. The impact of tuning hyperparameters, such as learning rate and network size, was explored. Optimized hyperparameters were used for assessment on an MT benchmark.</p><p><strong>Results: </strong>Before tuning, Deep Deterministic Policy Gradient had the highest success rate at 80% with a procedure time of 6.87 s when navigating to the supra-aortic arteries. After tuning, Proximal Policy Optimization achieved the highest success rate at 84% with a procedure time of 5.08 s. On the MT benchmark, Twin Delayed Deep Deterministic Policy Gradient recorded the highest success rate at 68% with a procedure time of 214.05 s.</p><p><strong>Conclusion: </strong>This work advances autonomous endovascular navigation by establishing a benchmark for MT. 
The results emphasize the importance of hyperparameter tuning on the performance of RL algorithms. Future research should extend this benchmark to identify the most effective RL algorithm.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1231-1238"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167280/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144040239","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ConsisTNet: a spatio-temporal approach for consistent anatomical localization in endoscopic pituitary surgery.","authors":"Zhehua Mao, Adrito Das, Danyal Z Khan, Simon C Williams, John G Hanrahan, Danail Stoyanov, Hani J Marcus, Sophia Bano","doi":"10.1007/s11548-025-03369-2","DOIUrl":"10.1007/s11548-025-03369-2","url":null,"abstract":"<p><strong>Purpose: </strong>Automated localization of critical anatomical structures in endoscopic pituitary surgery is crucial for enhancing patient safety and surgical outcomes. While deep learning models have shown promise in this task, their predictions often suffer from frame-to-frame inconsistency. This study addresses this issue by proposing ConsisTNet, a novel spatio-temporal model designed to improve prediction stability.</p><p><strong>Methods: </strong>ConsisTNet leverages spatio-temporal features extracted from consecutive frames to provide both temporally and spatially consistent predictions, addressing the limitations of single-frame approaches. We employ a semi-supervised strategy, utilizing ground-truth label tracking for pseudo-label generation through label propagation. Consistency is assessed by comparing predictions across consecutive frames using predicted label tracking. The model is optimized and accelerated using TensorRT for real-time intraoperative guidance.</p><p><strong>Results: </strong>Compared to previous state-of-the-art models, ConsisTNet significantly improves prediction consistency across video frames while maintaining high accuracy in segmentation and landmark detection. Specifically, segmentation consistency is improved by 4.56 and 9.45% in IoU for the two segmentation regions, and landmark detection consistency is enhanced with a 43.86% reduction in mean distance error. 
The accelerated model achieves an inference speed of 202 frames per second (FPS) with 16-bit floating point (FP16) precision, enabling real-time intraoperative guidance.</p><p><strong>Conclusion: </strong>ConsisTNet demonstrates significant improvements in spatio-temporal consistency of anatomical localization during endoscopic pituitary surgery, providing more stable and reliable real-time surgical assistance.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1239-1248"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167350/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144051526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Touching the tumor boundary: a pilot study on ultrasound-based virtual fixtures for breast-conserving surgery.","authors":"Laura Connolly, Tamas Ungi, Adnan Munawar, Anton Deguet, Chris Yeung, Russell H Taylor, Parvin Mousavi, Gabor Fichtinger, Keyvan Hashtrudi-Zaad","doi":"10.1007/s11548-025-03342-z","DOIUrl":"10.1007/s11548-025-03342-z","url":null,"abstract":"<p><strong>Purpose: </strong>Delineating tumor boundaries during breast-conserving surgery is challenging as tumors are often highly mobile, non-palpable, and have irregularly shaped borders. To address these challenges, we introduce a cooperative robotic guidance system that applies haptic feedback for tumor localization. In this pilot study, we aim to assess if and how this system can be successfully integrated into breast cancer care.</p><p><strong>Methods: </strong>A small haptic robot is retrofitted with an electrocautery blade to operate as a cooperatively controlled surgical tool. Ultrasound and electromagnetic navigation are used to identify the tumor boundaries and position. A forbidden region virtual fixture is imposed when the surgical tool collides with the tumor boundary. We conducted a study where users were asked to resect tumors from breast simulants both with and without the haptic guidance. We then assess the results of these simulated resections both qualitatively and quantitatively.</p><p><strong>Results: </strong>Virtual fixture guidance is shown to improve resection margins. On average, users find the task to be less mentally demanding, frustrating, and effort intensive when haptic feedback is available. We also discovered some unanticipated impacts on surgical workflow that will guide design adjustments and training protocol moving forward.</p><p><strong>Conclusion: </strong>Our results suggest that virtual fixtures can help localize tumor boundaries in simulated breast-conserving surgery. 
Future work will include an extensive user study to further validate these results and fine-tune our guidance system.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1105-1113"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143789345","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement learning for safe autonomous two-device navigation of cerebral vessels in mechanical thrombectomy.","authors":"Harry Robertshaw, Benjamin Jackson, Jiaheng Wang, Hadi Sadati, Lennart Karstensen, Alejandro Granados, Thomas C Booth","doi":"10.1007/s11548-025-03339-8","DOIUrl":"10.1007/s11548-025-03339-8","url":null,"abstract":"<p><strong>Purpose: </strong>Autonomous systems in mechanical thrombectomy (MT) hold promise for reducing procedure times, minimizing radiation exposure, and enhancing patient safety. However, current reinforcement learning (RL) methods only reach the carotid arteries, are not generalizable to other patient vasculatures, and do not consider safety. We propose a safe dual-device RL algorithm that can navigate beyond the carotid arteries to cerebral vessels.</p><p><strong>Methods: </strong>We used the Simulation Open Framework Architecture to represent the intricacies of cerebral vessels, and a modified Soft Actor-Critic RL algorithm to learn, for the first time, the navigation of micro-catheters and micro-guidewires. We incorporate patient safety metrics into our reward function by integrating guidewire tip forces. Inverse RL is used with demonstrator data on 12 patient-specific vascular cases.</p><p><strong>Results: </strong>Our simulation demonstrates successful autonomous navigation within unseen cerebral vessels, achieving a 96% success rate, 7.0 s procedure time, and 0.24 N mean forces, well below the proposed 1.5 N vessel rupture threshold.</p><p><strong>Conclusion: </strong>To the best of our knowledge, our proposed autonomous system for MT two-device navigation reaches cerebral vessels, considers safety, and is generalizable to unseen patient-specific cases for the first time. We envisage future work will extend the validation to vasculatures of different complexity and on in vitro models. 
While our contributions pave the way toward deploying agents in clinical settings, safety and trustworthiness will be crucial elements to consider when proposing new methodology.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1077-1086"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167253/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143774757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SwinCVS: a unified approach to classifying critical view of safety structures in laparoscopic cholecystectomy.","authors":"Franciszek M Nowak, Evangelos B Mazomenos, Brian Davidson, Matthew J Clarkson","doi":"10.1007/s11548-025-03354-9","DOIUrl":"10.1007/s11548-025-03354-9","url":null,"abstract":"<p><strong>Purpose: </strong>Laparoscopic cholecystectomy is one of the most commonly performed surgeries in the UK. Despite its safety, the volume of operations leads to a notable number of complications, with surgical errors often mitigated by the critical view of safety (CVS) technique. However, reliably achieving CVS intraoperatively can be challenging. Current state-of-the-art models for automated CVS evaluation rely on complex, multistage training and semantic segmentation masks, restricting their adaptability and limiting further performance improvements.</p><p><strong>Methods: </strong>We propose SwinCVS, a spatiotemporal architecture designed for end-to-end training. SwinCVS combines the SwinV2 image encoder with an LSTM for robust CVS classification. We evaluated three different backbones-SwinV2, VMamba, and ResNet50-to assess their ability to encode surgical images. SwinCVS was evaluated in both its end-to-end and pretrained variants, with performance statistically compared against the current state of the art, SV2LSTG, on the Endoscapes dataset.</p><p><strong>Results: </strong>SwinV2 proved the strongest encoder, achieving +2.07% and +17.72% mAP over VMamba and ResNet50, respectively. SwinCVS trained end-to-end achieves 64.59% mAP and performs on par with SV2LSTG (64.68% mAP, p=0.470), while its pretrained variant achieves 67.45% mAP, a significant improvement over the current SOTA.</p><p><strong>Conclusion: </strong>Our proposed solution offers a promising approach for CVS classification, outperforming existing methods and eliminating the need for semantic segmentation masks. 
Its design supports robust feature extraction and allows for future enhancements through additional tasks that enforce clinically relevant priors. The results highlight that attention-based architectures like SwinV2 are well suited for surgical image encoding, offering a practical approach for improving automated systems in laparoscopic surgery.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1145-1152"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167293/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143991413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multimodal imaging platform for enhanced tumor resection in neurosurgery: integrating hyperspectral and pCLE technologies.","authors":"Alfie Roddan, Tobias Czempiel, Chi Xu, Haozheng Xu, Alistair Weld, Vadzim Chalau, Giulio Anichini, Daniel S Elson, Stamatia Giannarou","doi":"10.1007/s11548-025-03340-1","DOIUrl":"10.1007/s11548-025-03340-1","url":null,"abstract":"<p><strong>Purpose: </strong>This work presents a novel multimodal imaging platform that integrates hyperspectral imaging (HSI) and probe-based confocal laser endomicroscopy (pCLE) for improved brain tumor identification during neurosurgery. By combining these two modalities, we aim to enhance surgical navigation, addressing the limitations of using each modality when used independently.</p><p><strong>Methods: </strong>We developed a multimodal imaging platform that integrates HSI and pCLE within an operating microscope setup using computer vision techniques. The system combines real-time, high-resolution HSI for macroscopic tissue analysis with pCLE for cellular-level imaging. The predictions of each modality made using Machine Learning methods are combined to improve tumor identification.</p><p><strong>Results: </strong>Our evaluation of the multimodal system revealed low spatial error, with minimal reprojection discrepancies, ensuring precise alignment between the HSI and pCLE. This combined imaging approach together with our multimodal tissue characterization algorithm significantly improves tumor identification, yielding higher Dice and Recall scores compared to using HSI or pCLE individually.</p><p><strong>Conclusion: </strong>Our multimodal imaging platform represents a crucial first step toward enhancing tumor identification by combining HSI and pCLE modalities for the first time. 
We highlight improvements in metrics such as the Dice score and Recall, underscoring the potential for further advancements in this area.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1087-1096"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167335/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143781305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"perfDSA: Automatic Perfusion Imaging in Cerebral Digital Subtraction Angiography.","authors":"Ruisheng Su, P Matthijs van der Sluijs, Flavius-Gabriel Marc, Frank Te Nijenhuis, Sandra A P Cornelissen, Bob Roozenbeek, Wim H van Zwam, Aad van der Lugt, Danny Ruijters, Josien Pluim, Theo van Walsum","doi":"10.1007/s11548-025-03359-4","DOIUrl":"10.1007/s11548-025-03359-4","url":null,"abstract":"<p><strong>Purpose: </strong>Cerebral digital subtraction angiography (DSA) is a standard imaging technique in image-guided interventions for visualizing cerebral blood flow and therapeutic guidance thanks to its high spatio-temporal resolution. To date, cerebral perfusion characteristics in DSA are primarily assessed visually by interventionists, which is time-consuming, error-prone, and subjective. To facilitate fast and reproducible assessment of cerebral perfusion, this work aims to develop and validate a fully automatic and quantitative framework for perfusion DSA.</p><p><strong>Methods: </strong>We put forward a framework, perfDSA, that automatically generates deconvolution-based perfusion parametric images from cerebral DSA. It automatically extracts the arterial input function from the supraclinoid internal carotid artery (ICA) and computes deconvolution-based perfusion parametric images including cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and Tmax.</p><p><strong>Results: </strong>On a DSA dataset with 1006 patients from the multicenter MR CLEAN registry, the proposed perfDSA achieves a Dice of 0.73(±0.21) in segmenting the supraclinoid ICA, resulting in high accuracy of arterial input function (AIF) curves similar to manual extraction. 
Moreover, some extracted perfusion images show statistically significant associations (P=2.62e-5) with favorable functional outcomes in stroke patients.</p><p><strong>Conclusion: </strong>The proposed perfDSA framework promises to aid therapeutic decision-making in cerebrovascular interventions and facilitate the discovery of novel quantitative biomarkers in clinical practice. The code is available at https://github.com/RuishengSu/perfDSA .</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":"1195-1203"},"PeriodicalIF":2.3,"publicationDate":"2025-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167352/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143991220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}