International Journal of Computer Assisted Radiology and Surgery — Latest Articles

Stable distance regression via spatial-frequency state space model for robot-assisted endomicroscopy.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-12 DOI: 10.1007/s11548-025-03353-w
Mengyi Zhou, Chi Xu, Stamatia Giannarou
Purpose: Probe-based confocal laser endomicroscopy (pCLE) is a noninvasive technique that enables the direct visualization of tissue at a microscopic level in real time. One of the main challenges in using pCLE is maintaining the probe within a working range of micrometer scale. As a result, the need arises for automatically regressing the probe-tissue distance to enable precise robotic tissue scanning.
Methods: In this paper, we propose the spatial frequency bidirectional structured state space model (SF-BiS4D) for pCLE probe-tissue distance regression. This model advances traditional state space models by processing image sequences bidirectionally and analyzing data in both the frequency and spatial domains. Additionally, we introduce a guided trajectory planning strategy that generates pseudo-distance labels, facilitating the training of sequential models to generate smooth and stable robotic scanning trajectories. To improve inference speed, we also implement a hierarchical guided fine-tuning (GF) approach that efficiently reduces the size of the BiS4D model while maintaining performance.
Results: The performance of our proposed model has been evaluated both qualitatively and quantitatively using the pCLE regression dataset (PRD). In comparison with existing state-of-the-art (SOTA) methods, our approach demonstrated superior performance in terms of accuracy and stability.
Conclusion: Our proposed deep learning-based framework effectively improves distance regression for microscopic visual servoing and demonstrates its potential for integration into surgical procedures requiring precise real-time intraoperative imaging.
Pages: 1167-1174 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167353/pdf/
Citations: 0
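The paper's bidirectional state space model is far more involved, but the core intuition — that processing a frame sequence in both directions stabilizes per-frame distance estimates — can be illustrated with a toy bidirectional smoother. Everything below (the exponential-smoothing scheme, the sample values) is illustrative and is not the authors' SF-BiS4D:

```python
def ema(seq, alpha=0.3):
    """Causal exponential moving average over a sequence."""
    out, s = [], seq[0]
    for x in seq:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

def bidirectional_smooth(distances, alpha=0.3):
    """Average a forward and a backward pass so each estimate
    uses context from both past and future frames."""
    fwd = ema(distances, alpha)
    bwd = ema(distances[::-1], alpha)[::-1]
    return [(f + b) / 2 for f, b in zip(fwd, bwd)]

# Noisy probe-tissue distance readings (micrometres, synthetic);
# the 70 is an outlier spike that a causal filter would lag behind.
noisy = [50, 52, 49, 70, 51, 50, 48, 51]
smooth = bidirectional_smooth(noisy)
```

A purely causal filter reacts to the spike late; combining both directions damps it symmetrically, which is the property that matters for generating smooth robotic scanning trajectories.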
The RoDEM benchmark: evaluating the robustness of monocular single-shot depth estimation methods in minimally-invasive surgery.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-26 DOI: 10.1007/s11548-025-03375-4
Rasoul Sharifian, Navid Rabbani, Adrien Bartoli
Purpose: Monocular Single-shot Depth Estimation (MoSDE) methods for Minimally-Invasive Surgery (MIS) are promising, but their robustness in surgical conditions remains questionable. We introduce the RoDEM benchmark, comprising an advanced analysis of perturbations, a dataset acquired in realistic MIS conditions, and metrics. The dataset consists of 29,803 ex-vivo images including 44 video sequences with depth ground truth, covering clean conditions and nine perturbations. We give the performance evaluation of nine existing MoSDE methods.
Methods: An RGB-D structured-light camera was firmly attached to a laparoscope. The two cameras were internally calibrated and the rigid transformation between them was estimated. Synchronised images and videos were captured while producing real perturbations in three settings. The depth maps were eventually transferred to the laparoscope viewpoint and the images categorised by perturbation severity.
Results: The proposed metrics cover accuracy (clean-condition performance) and robustness (resilience to perturbations). We found that foundation models demonstrated higher accuracy than the other methods. All methods were robust to motion blur and bright light. Methods trained on large datasets were robust against smoke, blood, and low light, whereas the other methods exhibited reduced robustness. None of the methods coped with lens dirtiness and defocus blur.
Conclusion: This study highlighted the importance of robustness evaluation in MoSDE, as many existing methods showed reduced accuracy against common surgical perturbations. It emphasises the importance of training with large datasets including perturbations. The proposed benchmark gives a precise and detailed analysis of a method's performance in MIS conditions. It will be made publicly available.
Pages: 1215-1229
Citations: 0
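The benchmark's two axes — accuracy in clean conditions and robustness under perturbation — can be captured by a simple relative-degradation score. This is a generic formulation for illustration; the paper's exact metric definitions may differ:

```python
def robustness_score(clean_error, perturbed_error):
    """Ratio of clean error to perturbed error: 1.0 means the
    perturbation caused no degradation, values near 0 mean the
    method broke down under it."""
    if perturbed_error <= clean_error:
        return 1.0
    return clean_error / perturbed_error

# Hypothetical per-condition depth errors (e.g. mean abs. error in mm)
errors = {"clean": 2.0, "smoke": 2.4, "defocus_blur": 8.0}
scores = {k: robustness_score(errors["clean"], v)
          for k, v in errors.items() if k != "clean"}
```

Under this scoring, a method that copes with smoke but fails under defocus blur (as most methods in the study did) gets a high score for the former and a low one for the latter.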
Generating 3D pseudo-healthy knee MR images to support trochleoplasty planning.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-03-26 DOI: 10.1007/s11548-025-03343-y
Michael Wehrli, Alicia Durrer, Paul Friedrich, Volodimir Buchakchiyskiy, Marcus Mumme, Edwin Li, Gyozo Lehoczky, Carol C Hasler, Philippe C Cattin
Purpose: Trochlear dysplasia (TD) is a common malformation in adolescents, leading to anterior knee pain and instability. Surgical interventions such as trochleoplasty require precise planning to correct the trochlear groove. However, no standardized preoperative plan exists to guide surgeons in reshaping the femur. This study aims to generate patient-specific, pseudo-healthy MR images of the trochlear region that should theoretically align with the respective patient's patella, potentially supporting the preoperative planning of trochleoplasty.
Methods: We employ a wavelet diffusion model (WDM) to generate personalized pseudo-healthy, anatomically plausible MR scans of the trochlear region. We train our model using knee MR scans of healthy subjects. During inference, we mask out pathological regions around the patella in scans of patients affected by TD and replace them with their pseudo-healthy counterpart. An orthopedic surgeon measured the sulcus angle (SA), trochlear groove depth (TGD), and Déjour classification in MR scans before and after inpainting. The code is available at https://github.com/wehrlimi/Generate-Pseudo-Healthy-Knee-MRI
Results: The inpainting by our model significantly improves the SA, TGD, and Déjour classification in a study with 49 knee MR scans.
Conclusion: This study demonstrates the potential of WDMs in providing surgeons with patient-specific guidance. By offering anatomically plausible MR scans, the method could potentially enhance the precision and preoperative planning of trochleoplasty and pave the way to more minimally invasive surgeries.
Pages: 1059-1066 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167290/pdf/
Citations: 0
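At the voxel level, the inpainting step amounts to keeping healthy voxels and substituting model output inside the pathological mask. A one-dimensional sketch of that compositing follows; the wavelet diffusion model itself is replaced here by a stand-in `generator` function, so this shows only the mask-and-replace logic, not the generative method:

```python
def inpaint(scan, mask, generator):
    """Replace masked voxels with generated pseudo-healthy values;
    voxels outside the mask pass through unchanged."""
    return [generator(i) if m else v
            for i, (v, m) in enumerate(zip(scan, mask))]

# Toy 1-D "scan": the dysplastic region (indices 2-4) is masked out
scan = [10, 11, 3, 2, 4, 12, 10]
mask = [False, False, True, True, True, False, False]
pseudo_healthy = inpaint(scan, mask, generator=lambda i: 10)
```

The key property — exploited when the surgeon re-measures SA and TGD afterwards — is that anatomy outside the mask is provably untouched.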
Benchmarking reinforcement learning algorithms for autonomous mechanical thrombectomy.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-29 DOI: 10.1007/s11548-025-03360-x
Farhana Moosa, Harry Robertshaw, Lennart Karstensen, Thomas C Booth, Alejandro Granados
Purpose: Mechanical thrombectomy (MT) is the gold standard for treating acute ischemic stroke. However, challenges such as operator radiation exposure, reliance on operator experience, and limited treatment access remain. Although autonomous robotics could mitigate some of these limitations, current research lacks benchmarking of reinforcement learning (RL) algorithms for MT. This study aims to evaluate the performance of Deep Deterministic Policy Gradient, Twin Delayed Deep Deterministic Policy Gradient, Soft Actor-Critic, and Proximal Policy Optimization for MT.
Methods: Simulated endovascular interventions based on the open-source stEVE platform were employed to train and evaluate RL algorithms. We simulated navigation of a guidewire from the descending aorta to the supra-aortic arteries, a key phase in MT. The impact of tuning hyperparameters, such as learning rate and network size, was explored. Optimized hyperparameters were used for assessment on an MT benchmark.
Results: Before tuning, Deep Deterministic Policy Gradient had the highest success rate at 80% with a procedure time of 6.87 s when navigating to the supra-aortic arteries. After tuning, Proximal Policy Optimization achieved the highest success rate at 84% with a procedure time of 5.08 s. On the MT benchmark, Twin Delayed Deep Deterministic Policy Gradient recorded the highest success rate at 68% with a procedure time of 214.05 s.
Conclusion: This work advances autonomous endovascular navigation by establishing a benchmark for MT. The results emphasize the importance of hyperparameter tuning on the performance of RL algorithms. Future research should extend this benchmark to identify the most effective RL algorithm.
Pages: 1231-1238 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167280/pdf/
Citations: 0
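The headline numbers (success rate and procedure time per algorithm) come from running each trained policy over many episodes and aggregating. A skeleton of such an evaluation harness is sketched below; the environment and policy are stubs, not the stEVE platform API:

```python
import random

def evaluate(policy, run_episode, n_episodes=100, seed=0):
    """Return (success_rate, mean time over successful episodes)."""
    rng = random.Random(seed)       # fixed seed for reproducibility
    successes, times = 0, []
    for _ in range(n_episodes):
        ok, t = run_episode(policy, rng)
        if ok:
            successes += 1
            times.append(t)
    rate = successes / n_episodes
    mean_t = sum(times) / len(times) if times else float("inf")
    return rate, mean_t

# Stub episode: the policy is just (success probability, base time)
def run_episode(policy, rng):
    p_success, base_time = policy
    return rng.random() < p_success, base_time + rng.random()

rate, mean_t = evaluate((0.84, 5.0), run_episode, n_episodes=1000)
```

The same harness, rerun per hyperparameter configuration, is how the before/after-tuning comparison in the abstract would be tabulated.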
ConsisTNet: a spatio-temporal approach for consistent anatomical localization in endoscopic pituitary surgery.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-29 DOI: 10.1007/s11548-025-03369-2
Zhehua Mao, Adrito Das, Danyal Z Khan, Simon C Williams, John G Hanrahan, Danail Stoyanov, Hani J Marcus, Sophia Bano
Purpose: Automated localization of critical anatomical structures in endoscopic pituitary surgery is crucial for enhancing patient safety and surgical outcomes. While deep learning models have shown promise in this task, their predictions often suffer from frame-to-frame inconsistency. This study addresses this issue by proposing ConsisTNet, a novel spatio-temporal model designed to improve prediction stability.
Methods: ConsisTNet leverages spatio-temporal features extracted from consecutive frames to provide both temporally and spatially consistent predictions, addressing the limitations of single-frame approaches. We employ a semi-supervised strategy, utilizing ground-truth label tracking for pseudo-label generation through label propagation. Consistency is assessed by comparing predictions across consecutive frames using predicted label tracking. The model is optimized and accelerated using TensorRT for real-time intraoperative guidance.
Results: Compared to previous state-of-the-art models, ConsisTNet significantly improves prediction consistency across video frames while maintaining high accuracy in segmentation and landmark detection. Specifically, segmentation consistency is improved by 4.56% and 9.45% in IoU for the two segmentation regions, and landmark detection consistency is enhanced with a 43.86% reduction in mean distance error. The accelerated model achieves an inference speed of 202 frames per second (FPS) at 16-bit floating-point (FP16) precision, enabling real-time intraoperative guidance.
Conclusion: ConsisTNet demonstrates significant improvements in the spatio-temporal consistency of anatomical localization during endoscopic pituitary surgery, providing more stable and reliable real-time surgical assistance.
Pages: 1239-1248 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167350/pdf/
Citations: 0
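Consistency here is quantified by comparing predictions on consecutive frames. A minimal version of such a metric over predicted segmentation masks (represented as pixel-index sets) is shown below; the paper's tracking-based comparison, which warps labels between frames before comparing, is more elaborate:

```python
def iou(a, b):
    """Intersection-over-union of two pixel-index sets."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def temporal_consistency(masks):
    """Mean IoU between consecutive frame predictions:
    higher means less frame-to-frame flicker."""
    scores = [iou(a, b) for a, b in zip(masks, masks[1:])]
    return sum(scores) / len(scores)

stable   = [{1, 2, 3}, {1, 2, 3}, {1, 2, 4}]   # small drift
flickery = [{1, 2, 3}, {7, 8}, {1, 2, 3}]      # prediction jumps away
```

A single-frame model can score well on per-frame accuracy yet badly on this metric, which is exactly the gap ConsisTNet targets.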
Touching the tumor boundary: a pilot study on ultrasound-based virtual fixtures for breast-conserving surgery.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-05 DOI: 10.1007/s11548-025-03342-z
Laura Connolly, Tamas Ungi, Adnan Munawar, Anton Deguet, Chris Yeung, Russell H Taylor, Parvin Mousavi, Gabor Fichtinger, Keyvan Hashtrudi-Zaad
Purpose: Delineating tumor boundaries during breast-conserving surgery is challenging, as tumors are often highly mobile, non-palpable, and have irregularly shaped borders. To address these challenges, we introduce a cooperative robotic guidance system that applies haptic feedback for tumor localization. In this pilot study, we aim to assess if and how this system can be successfully integrated into breast cancer care.
Methods: A small haptic robot is retrofitted with an electrocautery blade to operate as a cooperatively controlled surgical tool. Ultrasound and electromagnetic navigation are used to identify the tumor boundaries and position. A forbidden-region virtual fixture is imposed when the surgical tool collides with the tumor boundary. We conducted a study in which users were asked to resect tumors from breast simulants both with and without the haptic guidance. We then assess the results of these simulated resections both qualitatively and quantitatively.
Results: Virtual fixture guidance is shown to improve resection margins. On average, users find the task to be less mentally demanding, frustrating, and effort-intensive when haptic feedback is available. We also discovered some unanticipated impacts on surgical workflow that will guide design adjustments and training protocol moving forward.
Conclusion: Our results suggest that virtual fixtures can help localize tumor boundaries in simulated breast-conserving surgery. Future work will include an extensive user study to further validate these results and fine-tune our guidance system.
Pages: 1105-1113
Citations: 0
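A forbidden-region virtual fixture can be realized as a repulsive spring that activates only when the tool penetrates the protected boundary. The geometric sketch below uses an idealized spherical tumor boundary and an arbitrary stiffness; the actual system derives the boundary from ultrasound and electromagnetic tracking:

```python
import math

def virtual_fixture_force(tip, center, radius, stiffness=500.0):
    """Spring-like force (N) pushing the tool tip out of the
    forbidden sphere; zero whenever the tip is outside it.
    Assumes the tip is not exactly at the sphere center."""
    d = [t - c for t, c in zip(tip, center)]
    dist = math.hypot(*d)
    depth = radius - dist            # penetration depth, > 0 inside
    if depth <= 0:
        return (0.0, 0.0, 0.0)
    n = [x / dist for x in d]        # outward unit normal
    return tuple(stiffness * depth * x for x in n)

# Tip 3 cm out vs. 1.5 cm in, against a 2 cm-radius boundary
outside = virtual_fixture_force((0.0, 0.0, 0.03), (0, 0, 0), 0.02)
inside  = virtual_fixture_force((0.0, 0.0, 0.015), (0, 0, 0), 0.02)
```

Because the force is zero outside the boundary, the surgeon retains full cooperative control until the blade actually reaches the tumor margin.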
Reinforcement learning for safe autonomous two-device navigation of cerebral vessels in mechanical thrombectomy.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-03 DOI: 10.1007/s11548-025-03339-8
Harry Robertshaw, Benjamin Jackson, Jiaheng Wang, Hadi Sadati, Lennart Karstensen, Alejandro Granados, Thomas C Booth
Purpose: Autonomous systems in mechanical thrombectomy (MT) hold promise for reducing procedure times, minimizing radiation exposure, and enhancing patient safety. However, current reinforcement learning (RL) methods only reach the carotid arteries, are not generalizable to other patient vasculatures, and do not consider safety. We propose a safe dual-device RL algorithm that can navigate beyond the carotid arteries to cerebral vessels.
Methods: We used the Simulation Open Framework Architecture to represent the intricacies of cerebral vessels, and a modified Soft Actor-Critic RL algorithm to learn, for the first time, the navigation of micro-catheters and micro-guidewires. We incorporate patient safety metrics into our reward function by integrating guidewire tip forces. Inverse RL is used with demonstrator data on 12 patient-specific vascular cases.
Results: Our simulation demonstrates successful autonomous navigation within unseen cerebral vessels, achieving a 96% success rate, 7.0 s procedure time, and 0.24 N mean forces, well below the proposed 1.5 N vessel rupture threshold.
Conclusion: To the best of our knowledge, our proposed autonomous system for MT two-device navigation reaches cerebral vessels, considers safety, and is generalizable to unseen patient-specific cases for the first time. We envisage future work will extend the validation to vasculatures of different complexity and to in vitro models. While our contributions pave the way toward deploying agents in clinical settings, safety and trustworthiness will be crucial elements to consider when proposing new methodology.
Pages: 1077-1086 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167253/pdf/
Citations: 0
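Folding guidewire tip force into the reward is the safety mechanism the abstract describes. A hedged sketch of one way such a shaped reward could look is below; the progress/penalty weights and bonus values are invented for illustration, and only the 1.5 N rupture threshold comes from the paper:

```python
def reward(reached_target, dist_to_target, prev_dist, tip_force,
           force_limit=1.5, force_weight=0.5):
    """Shaped RL reward: progress toward the target, a penalty
    proportional to tip force, a large penalty for rupture-level
    force, and a terminal bonus on success."""
    r = prev_dist - dist_to_target      # progress this step
    r -= force_weight * tip_force       # continuous safety penalty
    if tip_force > force_limit:
        r -= 10.0                       # force above rupture threshold
    if reached_target:
        r += 10.0                       # terminal success bonus
    return r

safe   = reward(False, 9.0, 10.0, tip_force=0.24)
unsafe = reward(False, 9.0, 10.0, tip_force=2.0)
```

With identical navigation progress, the unsafe step is dominated by the threshold penalty, steering the learned policy toward low-force trajectories like the 0.24 N mean reported.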
SwinCVS: a unified approach to classifying critical view of safety structures in laparoscopic cholecystectomy.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-11 DOI: 10.1007/s11548-025-03354-9
Franciszek M Nowak, Evangelos B Mazomenos, Brian Davidson, Matthew J Clarkson
Purpose: Laparoscopic cholecystectomy is one of the most commonly performed surgeries in the UK. Despite its safety, the volume of operations leads to a notable number of complications, with surgical errors often mitigated by the critical view of safety (CVS) technique. However, reliably achieving CVS intraoperatively can be challenging. Current state-of-the-art models for automated CVS evaluation rely on complex, multistage training and semantic segmentation masks, restricting their adaptability and limiting further performance improvements.
Methods: We propose SwinCVS, a spatiotemporal architecture designed for end-to-end training. SwinCVS combines the SwinV2 image encoder with an LSTM for robust CVS classification. We evaluated three different backbones (SwinV2, VMamba, and ResNet50) to assess their ability to encode surgical images. SwinCVS was evaluated in both an end-to-end variant and a pretrained variant, with performance statistically compared against the current state of the art, SV2LSTG, on the Endoscapes dataset.
Results: SwinV2 proved to be the best encoder, achieving +2.07% and +17.72% mAP over VMamba and ResNet50, respectively. SwinCVS trained end-to-end achieves 64.59% mAP, on par with SV2LSTG (64.68% mAP, p=0.470), while its pretrained variant achieves 67.45% mAP, a significant improvement over the current SOTA.
Conclusion: Our proposed solution offers a promising approach for CVS classification, outperforming existing methods and eliminating the need for semantic segmentation masks. Its design supports robust feature extraction and allows for future enhancements through additional tasks that enforce clinically relevant priors. The results highlight that attention-based architectures like SwinV2 are well suited for surgical image encoding, offering a practical approach for improving automated systems in laparoscopic surgery.
Pages: 1145-1152 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167293/pdf/
Citations: 0
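The mAP figures quoted above average a per-criterion average precision over the CVS criteria. A minimal, non-interpolated implementation of that metric is sketched below; the exact interpolation convention and per-video aggregation used in Endoscapes evaluations may differ:

```python
def average_precision(scores, labels):
    """AP for one criterion: mean of the precision values observed
    at each true-positive rank when predictions are sorted by
    descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, precisions = 0, []
    for rank, i in enumerate(order, start=1):
        if labels[i]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(per_criterion):
    """mAP: average the per-criterion APs."""
    aps = [average_precision(s, l) for s, l in per_criterion]
    return sum(aps) / len(aps)

# Two toy criteria: (prediction scores, ground-truth labels)
data = [([0.9, 0.8, 0.1], [1, 0, 1]),
        ([0.7, 0.6, 0.5], [1, 1, 0])]
map_score = mean_average_precision(data)
```

Averaging per criterion (rather than pooling all predictions) keeps a rare CVS criterion from being drowned out by a common one, which is why mAP is the standard report here.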
Multimodal imaging platform for enhanced tumor resection in neurosurgery: integrating hyperspectral and pCLE technologies.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-03 DOI: 10.1007/s11548-025-03340-1
Alfie Roddan, Tobias Czempiel, Chi Xu, Haozheng Xu, Alistair Weld, Vadzim Chalau, Giulio Anichini, Daniel S Elson, Stamatia Giannarou
Purpose: This work presents a novel multimodal imaging platform that integrates hyperspectral imaging (HSI) and probe-based confocal laser endomicroscopy (pCLE) for improved brain tumor identification during neurosurgery. By combining these two modalities, we aim to enhance surgical navigation, addressing the limitations of each modality when used independently.
Methods: We developed a multimodal imaging platform that integrates HSI and pCLE within an operating microscope setup using computer vision techniques. The system combines real-time, high-resolution HSI for macroscopic tissue analysis with pCLE for cellular-level imaging. The predictions of each modality, made using machine learning methods, are combined to improve tumor identification.
Results: Our evaluation of the multimodal system revealed low spatial error, with minimal reprojection discrepancies, ensuring precise alignment between HSI and pCLE. This combined imaging approach, together with our multimodal tissue characterization algorithm, significantly improves tumor identification, yielding higher Dice and Recall scores compared to using HSI or pCLE individually.
Conclusion: Our multimodal imaging platform represents a crucial first step toward enhancing tumor identification by combining HSI and pCLE modalities for the first time. We highlight improvements in metrics such as the Dice score and Recall, underscoring the potential for further advancements in this area.
Pages: 1087-1096 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167335/pdf/
Citations: 0
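The abstract says the two modalities' predictions are combined but does not spell out the rule. One common scheme for combining per-modality machine-learning outputs is probability-level (late) fusion, sketched here with made-up class probabilities; this is an assumption about the fusion step, not the authors' algorithm:

```python
def late_fusion(p_hsi, p_pcle, w_hsi=0.5):
    """Weighted average of per-class probabilities from the two
    modalities, then argmax over the fused distribution."""
    fused = {c: w_hsi * p_hsi[c] + (1 - w_hsi) * p_pcle[c]
             for c in p_hsi}
    return max(fused, key=fused.get), fused

# Hypothetical per-class probabilities for one co-registered region:
# HSI is uncertain at the macroscopic scale, pCLE is confident.
p_hsi  = {"tumor": 0.55, "healthy": 0.45}
p_pcle = {"tumor": 0.80, "healthy": 0.20}
label, fused = late_fusion(p_hsi, p_pcle)
```

Weighting lets the cellular-level modality dominate where its probe has actually sampled, which is the intuition behind fusing a macroscopic and a microscopic view at all.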
perfDSA: Automatic Perfusion Imaging in Cerebral Digital Subtraction Angiography.
IF 2.3 · CAS Zone 3 · Medicine
International Journal of Computer Assisted Radiology and Surgery Pub Date : 2025-06-01 Epub Date: 2025-04-24 DOI: 10.1007/s11548-025-03359-4
Ruisheng Su, P Matthijs van der Sluijs, Flavius-Gabriel Marc, Frank Te Nijenhuis, Sandra A P Cornelissen, Bob Roozenbeek, Wim H van Zwam, Aad van der Lugt, Danny Ruijters, Josien Pluim, Theo van Walsum
Purpose: Cerebral digital subtraction angiography (DSA) is a standard imaging technique in image-guided interventions for visualizing cerebral blood flow and therapeutic guidance, thanks to its high spatio-temporal resolution. To date, cerebral perfusion characteristics in DSA are primarily assessed visually by interventionists, which is time-consuming, error-prone, and subjective. To facilitate fast and reproducible assessment of cerebral perfusion, this work aims to develop and validate a fully automatic and quantitative framework for perfusion DSA.
Methods: We put forward a framework, perfDSA, that automatically generates deconvolution-based perfusion parametric images from cerebral DSA. It automatically extracts the arterial input function from the supraclinoid internal carotid artery (ICA) and computes deconvolution-based perfusion parametric images including cerebral blood volume (CBV), cerebral blood flow (CBF), mean transit time (MTT), and Tmax.
Results: On a DSA dataset with 1006 patients from the multicenter MR CLEAN registry, the proposed perfDSA achieves a Dice of 0.73 (±0.21) in segmenting the supraclinoid ICA, resulting in arterial input function (AIF) curves of high accuracy, similar to manual extraction. Moreover, some extracted perfusion images show statistically significant associations (P = 2.62e-5) with favorable functional outcomes in stroke patients.
Conclusion: The proposed perfDSA framework promises to aid therapeutic decision-making in cerebrovascular interventions and facilitate discoveries of novel quantitative biomarkers in clinical practice. The code is available at https://github.com/RuishengSu/perfDSA
Pages: 1195-1203 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167352/pdf/
Citations: 0
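perfDSA derives its parametric maps by deconvolving tissue curves with the AIF; that deconvolution is beyond a short sketch, but the meaning of the parameters can be illustrated with a toy computation from two time-intensity curves. CBV is approximated here as the ratio of areas under the curves and Tmax as the peak-to-peak delay — simplifications of the deconvolution-based definitions in the paper:

```python
def perfusion_params(aif, tissue, dt=1.0):
    """Toy perfusion summary: CBV as area(tissue)/area(AIF) and
    Tmax as the delay of the tissue peak relative to the AIF peak.
    perfDSA computes these properly via deconvolution."""
    area = lambda curve: sum(curve) * dt    # rectangle-rule integral
    cbv = area(tissue) / area(aif)
    tmax = (tissue.index(max(tissue)) - aif.index(max(aif))) * dt
    return cbv, tmax

# Synthetic time-intensity curves (one sample per dt seconds)
aif    = [0, 5, 10, 5, 1, 0, 0]   # sharp arterial bolus
tissue = [0, 0, 2, 4, 2, 1, 0]    # delayed, dispersed tissue response
cbv, tmax = perfusion_params(aif, tissue)
```

In stroke imaging a prolonged Tmax in a territory flags delayed arrival, which is why these maps feed therapeutic decision-making.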