International Journal of Computer Assisted Radiology and Surgery: Latest Articles

SwinCVS: a unified approach to classifying critical view of safety structures in laparoscopic cholecystectomy.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-11 | DOI: 10.1007/s11548-025-03354-9
Franciszek M Nowak, Evangelos B Mazomenos, Brian Davidson, Matthew J Clarkson

Purpose: Laparoscopic cholecystectomy is one of the most commonly performed surgeries in the UK. Despite its safety, the volume of operations leads to a notable number of complications, and surgical errors are often mitigated by the critical view of safety (CVS) technique. However, reliably achieving CVS intraoperatively can be challenging. Current state-of-the-art models for automated CVS evaluation rely on complex, multistage training and semantic segmentation masks, restricting their adaptability and limiting further performance improvements.

Methods: We propose SwinCVS, a spatiotemporal architecture designed for end-to-end training. SwinCVS combines the SwinV2 image encoder with an LSTM for robust CVS classification. We evaluated three backbones (SwinV2, VMamba, and ResNet50) to assess their ability to encode surgical images. SwinCVS was evaluated in both an end-to-end variant and a pretrained variant, with performance statistically compared against the current state of the art, SV2LSTG, on the Endoscapes dataset.

Results: SwinV2 proved to be the best encoder, achieving +2.07% and +17.72% mAP over VMamba and ResNet50, respectively. SwinCVS trained end-to-end achieves 64.59% mAP, on par with SV2LSTG (64.68% mAP, p=0.470), while its pretrained variant achieves 67.45% mAP, a significant improvement over the current state of the art.

Conclusion: Our proposed solution offers a promising approach for CVS classification, outperforming existing methods and eliminating the need for semantic segmentation masks. Its design supports robust feature extraction and allows for future enhancements through additional tasks that enforce clinically relevant priors. The results highlight that attention-based architectures like SwinV2 are well suited for surgical image encoding, offering a practical approach for improving automated systems in laparoscopic surgery.

Citations: 0
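The abstract's core design, a SwinV2 frame encoder feeding an LSTM for clip-level CVS classification, can be sketched in a few lines of PyTorch. The timm backbone name, feature sizes, and three-criteria multi-label head below are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of a SwinV2-encoder + LSTM classifier in the spirit of SwinCVS.
# Assumptions (not from the paper): timm backbone choice, hidden sizes, and a
# 3-criteria multi-label CVS head.
import torch
import torch.nn as nn
import timm

class SwinLSTMClassifier(nn.Module):
    def __init__(self, num_criteria: int = 3, lstm_hidden: int = 512):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.encoder = timm.create_model(
            "swinv2_tiny_window8_256", pretrained=True, num_classes=0
        )
        feat_dim = self.encoder.num_features
        self.lstm = nn.LSTM(feat_dim, lstm_hidden, batch_first=True)
        self.head = nn.Linear(lstm_hidden, num_criteria)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 3, H, W) -> per-frame features -> temporal LSTM.
        b, t, c, h, w = clip.shape
        feats = self.encoder(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])  # multi-label logits, one per CVS criterion

logits = SwinLSTMClassifier()(torch.randn(2, 8, 3, 256, 256))
probs = torch.sigmoid(logits)  # independent probability per criterion
```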
3D reconstruction in endonasal pituitary surgery.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-11 | DOI: 10.1007/s11548-025-03362-9
Dannielle Lee, Laurent Mennillo, Emalee Burrows, Jia-En Chen, Danyal Z Khan, Joachim Starup-Hansen, Danail Stoyanov, Matthew J Clarkson, Hani J Marcus, Sophia Bano

Purpose: Endoscopic transsphenoidal surgery for pituitary tumors is hindered by limited visibility and maneuverability due to the narrow nasal corridor, increasing the risk of complications. To address these challenges, we present a pipeline for 3D reconstruction of the sellar anatomy from monocular endoscopic videos to enhance intraoperative visualization and navigation.

Methods: Data were collected through a user study with trainee surgeons, and the procedure was conducted on 3D-printed, anatomically correct phantom devices. To overcome limitations posed by the uniform, textureless surfaces of these devices, learned feature detectors and matchers were leveraged to extract meaningful information from the images. The matched features were reconstructed using COLMAP, and the resulting surfaces were evaluated using the iterative closest point algorithm against the CAD ground-truth surface of the printed phantoms.

Results: Most methods produced accurate reconstructions, with moderate variability in cases with high blur or occlusions. The two best methods, Dense Kernelized Feature Matching and SuperPoint with LightGlue, achieved average RMSE values of 0.33 mm and 0.41 mm, respectively, in the surface registrations across all test sequences, with a significantly higher computation time for Dense Kernelized Feature Matching.

Conclusion: The proposed pipeline accurately reconstructed anatomically correct 3D models of the phantom devices, showing potential for the use of learned feature detectors and matchers in real time for AR-guided navigation in pituitary surgery.

Citations: 0
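The paper's evaluation registers each reconstruction against the CAD ground truth with iterative closest point and reports surface RMSE. A minimal sketch of that step with Open3D follows; the synthetic point clouds and the 5 mm correspondence radius are stand-ins for the study's data and settings.

```python
# Sketch of the evaluation step: align a reconstructed point cloud to the
# ground-truth surface with ICP and report inlier RMSE. Synthetic data stands
# in for the COLMAP output and the phantom CAD surface.
import numpy as np
import open3d as o3d

rng = np.random.default_rng(0)
gt_points = rng.uniform(-30, 30, size=(2000, 3))                 # mm, stand-in CAD surface
recon_points = gt_points + rng.normal(0, 0.3, gt_points.shape)   # noisy "reconstruction"
recon_points += np.array([2.0, -1.0, 0.5])                       # small misalignment

gt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(gt_points))
recon = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(recon_points))

result = o3d.pipelines.registration.registration_icp(
    recon, gt,
    max_correspondence_distance=5.0,  # mm, assumed search radius
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(),
)
print(f"fitness={result.fitness:.3f}, inlier RMSE={result.inlier_rmse:.3f} mm")
```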
Heat: high-efficiency simulation for thermal ablation therapy.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-10 | DOI: 10.1007/s11548-025-03350-z
Jonas Mehtali, Juan Verde, Caroline Essert

Purpose: Percutaneous thermal ablation is increasingly popular but still suffers from complex preoperative planning, especially with multiple needles. Existing planning methods either use theoretical ablation shapes for faster estimates or are computationally intensive when incorporating realistic thermal propagation. This paper introduces a multi-resolution approach that accelerates thermal propagation simulation, enabling users to adjust ablation parameters and see the results in interactive time.

Methods: For static needle positions, a high-resolution simulation based on a GPU-accelerated implementation of the Pennes bioheat equation is used. During user interaction, intermediate frames display a lower-resolution estimation of the ablated volume. Two methods are compared, based on GPU-accelerated reimplementations of finite difference and lattice Boltzmann approaches. A parameter study was conducted to identify the optimal balance between speed and accuracy for the low- and high-resolution frames. The chosen parameters are finally tested in multi-needle scenarios to validate the interactive capability in this context.

Results: Tested with percutaneous radiofrequency data, our multi-resolution method significantly reduces computation time while maintaining good accuracy compared to the reference simulation. High-resolution frames reach up to 5.8 fps, while intermediate low-resolution frames reach 32 fps with less than 20% loss of accuracy.

Conclusion: This multi-resolution approach allows for smooth interaction with multiple needles, with instant visualization of the predicted ablation volume, in the context of percutaneous radiofrequency treatments. It could also be applied to automated planning, reducing the time required for iterative adjustments.

Citations: 0
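The high-resolution frames solve the Pennes bioheat equation with a finite-difference scheme. The sketch below shows one explicit 3D update on the CPU, assuming typical textbook tissue parameters rather than the paper's values; the actual system runs GPU-accelerated finite-difference and lattice Boltzmann solvers.

```python
# One explicit finite-difference step of the Pennes bioheat equation:
#   rho*c*dT/dt = k*lap(T) + rho_b*c_b*w_b*(T_a - T) + Q
# CPU NumPy sketch with periodic boundaries for brevity; parameter values are
# typical literature numbers, not the paper's.
import numpy as np

rho, c, k = 1060.0, 3600.0, 0.51           # tissue density, heat capacity, conductivity
rho_b, c_b, w_b = 1050.0, 3617.0, 0.0064   # blood properties and perfusion rate
T_a, dx, dt = 37.0, 1e-3, 0.05             # arterial temp (C), grid step (m), time step (s)

def pennes_step(T: np.ndarray, Q: np.ndarray) -> np.ndarray:
    """Advance the 3D temperature field T (Celsius) by dt; Q is the source term (W/m^3)."""
    lap = (
        np.roll(T, 1, 0) + np.roll(T, -1, 0)
        + np.roll(T, 1, 1) + np.roll(T, -1, 1)
        + np.roll(T, 1, 2) + np.roll(T, -1, 2)
        - 6.0 * T
    ) / dx**2
    dT = (k * lap + rho_b * c_b * w_b * (T_a - T) + Q) / (rho * c)
    return T + dt * dT

T = np.full((64, 64, 64), 37.0)
Q = np.zeros_like(T); Q[32, 32, 32] = 5e7   # point heat source at the needle tip
for _ in range(100):                        # 5 simulated seconds
    T = pennes_step(T, Q)
print(f"peak temperature after 5 s: {T.max():.1f} C")
```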
Intelligent control of robotic X-ray devices using a language-promptable digital twin.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-09 | DOI: 10.1007/s11548-025-03351-y
Benjamin D Killeen, Anushri Suresh, Catalina Gomez, Blanca Íñigo, Christopher Bailey, Mathias Unberath

Purpose: Natural language offers a convenient, flexible interface for controlling robotic C-arm X-ray systems, making advanced functionality and controls easily accessible. However, enabling language interfaces requires specialized artificial intelligence (AI) models that interpret X-ray images to create a semantic representation for language-based reasoning. The fixed outputs of such AI models fundamentally limit the functionality of language controls that users may access. Incorporating flexible, language-aligned AI models that can be prompted through language facilitates more flexible interfaces for a much wider variety of tasks and procedures.

Methods: Using a language-aligned foundation model for X-ray image segmentation, our system continually updates a patient digital twin based on sparse reconstructions of desired anatomical structures. This allows for multiple autonomous capabilities, including visualization, patient-specific viewfinding, and automatic collimation from novel viewpoints, enabling complex language control commands like "Focus in on the lower lumbar vertebrae."

Results: In a cadaver study, multiple users were able to visualize, localize, and collimate around structures across the torso region using only verbal commands to control a robotic X-ray system, with 84% end-to-end success. In post hoc analysis of randomly oriented images, our patient digital twin was able to localize 35 commonly requested structures from a given image to within 51.68 ± 30.84 mm, which enables localization and isolation of the object from arbitrary orientations.

Conclusion: Overall, we show how intelligent robotic X-ray systems can incorporate physicians' expressed intent directly. Existing foundation models for intra-operative X-ray image analysis exhibit certain failure modes. Nevertheless, our results suggest that as these models become more capable, they can facilitate highly flexible, intelligent robotic C-arms.

Citations: 0
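How a verbal command might flow through a patient digital twin to a C-arm pose can be illustrated with a deliberately simplified sketch. Everything below, the structure table, the keyword parser, and the toy view geometry, is invented for illustration and is not the authors' system or API.

```python
# Hypothetical sketch of the "language command -> digital twin -> C-arm view"
# flow the abstract describes. Centroids, parsing, and view math are invented
# placeholders, not the authors' implementation.
from dataclasses import dataclass
import numpy as np

@dataclass
class ViewCommand:
    structure: str   # anatomical target named in the verbal command
    action: str      # e.g. "focus" (move isocenter + collimate)

# Toy digital twin: structure centroids (mm, patient frame) that the real
# system would obtain from sparse segmentation-based reconstructions.
TWIN_CENTROIDS = {
    "lower lumbar vertebrae": np.array([10.0, -40.0, 250.0]),
    "left femoral head": np.array([90.0, 30.0, 480.0]),
}

def parse(cmd: str) -> ViewCommand:
    # Trivial keyword matcher standing in for the language front end.
    for name in TWIN_CENTROIDS:
        if name in cmd.lower():
            return ViewCommand(structure=name, action="focus")
    raise ValueError(f"no known structure in command: {cmd!r}")

def plan_view(cmd: ViewCommand, source_dist_mm: float = 600.0) -> dict:
    target = TWIN_CENTROIDS[cmd.structure]
    # Toy geometry: place the X-ray source on a fixed AP axis above the target.
    source = target + np.array([0.0, 0.0, source_dist_mm])
    return {"isocenter_mm": target, "source_mm": source, "collimate_to": cmd.structure}

plan = plan_view(parse("Focus in on the lower lumbar vertebrae."))
print(plan["isocenter_mm"], plan["collimate_to"])
```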
Touching the tumor boundary: a pilot study on ultrasound-based virtual fixtures for breast-conserving surgery.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-05 | DOI: 10.1007/s11548-025-03342-z
Laura Connolly, Tamas Ungi, Adnan Munawar, Anton Deguet, Chris Yeung, Russell H Taylor, Parvin Mousavi, Gabor Fichtinger, Keyvan Hashtrudi-Zaad

Purpose: Delineating tumor boundaries during breast-conserving surgery is challenging as tumors are often highly mobile, non-palpable, and have irregularly shaped borders. To address these challenges, we introduce a cooperative robotic guidance system that applies haptic feedback for tumor localization. In this pilot study, we aim to assess if and how this system can be successfully integrated into breast cancer care.

Methods: A small haptic robot is retrofitted with an electrocautery blade to operate as a cooperatively controlled surgical tool. Ultrasound and electromagnetic navigation are used to identify the tumor boundaries and position. A forbidden-region virtual fixture is imposed when the surgical tool collides with the tumor boundary. We conducted a study where users were asked to resect tumors from breast simulants both with and without the haptic guidance. We then assess the results of these simulated resections both qualitatively and quantitatively.

Results: Virtual fixture guidance is shown to improve resection margins. On average, users find the task to be less mentally demanding, frustrating, and effort intensive when haptic feedback is available. We also discovered some unanticipated impacts on surgical workflow that will guide design adjustments and training protocol moving forward.

Conclusion: Our results suggest that virtual fixtures can help localize tumor boundaries in simulated breast-conserving surgery. Future work will include an extensive user study to further validate these results and fine-tune our guidance system.

Citations: 0
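A forbidden-region virtual fixture of the kind described here typically applies a spring force that opposes penetration of the protected boundary. The sketch below uses a spherical tumor margin with an assumed stiffness; the study derives the actual boundary from tracked ultrasound.

```python
# Minimal forbidden-region virtual fixture: a spring force resisting tool
# penetration into a protected (tumor) region. Sphere geometry and gain are
# illustrative; the paper's boundaries come from ultrasound and EM tracking.
import numpy as np

TUMOR_CENTER = np.array([0.0, 0.0, 0.04])  # m, placeholder for tracked boundary
TUMOR_RADIUS = 0.015                       # m
STIFFNESS = 800.0                          # N/m, assumed haptic spring gain

def fixture_force(tool_tip: np.ndarray) -> np.ndarray:
    """Return the haptic force opposing penetration of the forbidden region."""
    offset = tool_tip - TUMOR_CENTER
    dist = np.linalg.norm(offset)
    penetration = TUMOR_RADius if False else TUMOR_RADIUS - dist
    if penetration <= 0.0:            # outside the boundary: no constraint force
        return np.zeros(3)
    normal = offset / max(dist, 1e-9) # outward surface normal at the tool tip
    return STIFFNESS * penetration * normal  # push the tool back out

print(fixture_force(np.array([0.0, 0.0, 0.03])))  # tip 5 mm inside -> ~4 N outward
```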
Parametric-MAA: fast, object-centric avoidance of metal artifacts for intraoperative CBCT.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-05 | DOI: 10.1007/s11548-025-03348-7
Maximilian Rohleder, Andreas Maier, Bjoern Kreher

Purpose: Metal artifacts remain a persistent issue in intraoperative CBCT imaging. Particularly in orthopedic and trauma applications, these artifacts obstruct clinically relevant areas around the implant, reducing the modality's clinical value. Metal artifact avoidance (MAA) methods have shown potential to improve image quality through trajectory adjustments, but often fail in clinical practice due to their focus on irrelevant objects and high computational demands. To address these limitations, we introduce the novel parametric metal artifact avoidance (P-MAA) method.

Methods: The P-MAA method first detects keypoints in two scout views using a deep learning model. These keypoints are used to model each clinically relevant object as an ellipsoid, capturing its position, extent, and orientation. We hypothesize that fine details of object shapes are less critical for artifact reduction. Based on these ellipsoidal representations, we devise a computationally efficient metric for scoring view trajectories, enabling fast, CPU-based optimization. A detection model for object localization was trained using both simulated and real data and validated on real clinical cases. The scoring method was benchmarked against a raytracing-based approach.

Results: The trained detection model achieved a mean average recall of 0.78, demonstrating generalizability to unseen clinical cases. The ellipsoid-based scoring method closely approximated results using raytracing and was effective in complex clinical scenarios. Additionally, the ellipsoid method provided a 33-fold increase in speed, without the need for GPU acceleration.

Conclusion: The P-MAA approach provides a feasible solution for metal artifact avoidance in CBCT imaging, enabling fast trajectory optimization while focusing on clinically relevant objects. This method represents a significant step toward practical intraoperative implementation of MAA techniques.

Citations: 0
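Scoring a view by how much metal its rays traverse becomes cheap under the ellipsoid representation, since ray-ellipsoid intersection is analytic. The sketch below sums chord lengths through axis-aligned ellipsoids; it is our reading of the abstract (the paper also models orientation), not the exact P-MAA metric.

```python
# Analytic ray-ellipsoid intersection for fast metal scoring, in the spirit of
# P-MAA's ellipsoid object model. Axis-aligned ellipsoids only; summed chord
# length as the view score is an illustrative reading of the abstract.
import numpy as np

def chord_length(origin, direction, center, semi_axes):
    """Length of a unit-direction ray's chord through an axis-aligned ellipsoid."""
    o = (origin - center) / semi_axes  # scale so the ellipsoid becomes a unit sphere
    d = direction / semi_axes          # same scaling preserves the ray parameter t
    a, b, c = d @ d, 2.0 * (o @ d), o @ o - 1.0
    disc = b * b - 4.0 * a * c
    if disc <= 0.0:
        return 0.0                     # ray misses the ellipsoid
    t1, t2 = sorted(((-b - np.sqrt(disc)) / (2 * a), (-b + np.sqrt(disc)) / (2 * a)))
    return max(t2, 0.0) - max(t1, 0.0)  # clip the chord to the forward half-ray

def view_score(source, detector_points, ellipsoids):
    """Lower is better: total metal traversed by rays from source to detector."""
    score = 0.0
    for p in detector_points:
        ray = p - source
        ray /= np.linalg.norm(ray)
        for center, semi_axes in ellipsoids:
            score += chord_length(source, ray, center, semi_axes)
    return score

implant = (np.zeros(3), np.array([10.0, 5.0, 5.0]))   # one implant ellipsoid, mm
print(chord_length(np.array([0.0, 0.0, -50.0]),
                   np.array([0.0, 0.0, 1.0]), *implant))  # central ray: 10 mm chord
```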
FACT: foundation model for assessing cancer tissue margins with mass spectrometry.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-04 | DOI: 10.1007/s11548-025-03355-8
Mohammad Farahmand, Amoon Jamzad, Fahimeh Fooladgar, Laura Connolly, Martin Kaufmann, Kevin Yi Mi Ren, John Rudan, Doug McKay, Gabor Fichtinger, Parvin Mousavi

Purpose: Accurately classifying tissue margins during cancer surgeries is crucial for ensuring complete tumor removal. Rapid Evaporative Ionization Mass Spectrometry (REIMS), a tool for real-time intraoperative margin assessment, generates spectra that require machine learning models to support clinical decision-making. However, the scarcity of labeled data in surgical contexts presents a significant challenge. This study is the first to develop a foundation model tailored specifically for REIMS data, addressing this limitation and advancing real-time surgical margin assessment.

Methods: We propose FACT, a Foundation model for Assessing Cancer Tissue margins. FACT is an adaptation of a foundation model originally designed for text-audio association, pretrained using our proposed supervised contrastive approach based on triplet loss. An ablation study is performed to compare our proposed model against other models and pretraining methods.

Results: Our proposed model significantly improves the classification performance, achieving state-of-the-art performance with an AUROC of 82.4% ± 0.8. The results demonstrate the advantage of our proposed pretraining method and selected backbone over the self-supervised and semi-supervised baselines and alternative models.

Conclusion: Our findings demonstrate that foundation models, adapted and pretrained using our novel approach, can effectively classify REIMS data even with limited labeled examples. This highlights the viability of foundation models for enhancing real-time surgical margin assessment, particularly in data-scarce clinical environments.

Citations: 0
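The pretraining objective pulls same-class REIMS spectra together and pushes different classes apart via a triplet loss. A minimal PyTorch sketch of label-based triplet sampling with the standard margin loss follows; the small embedding network and random sampling stand in for FACT's foundation-model backbone and batch construction.

```python
# Minimal supervised triplet-loss step: anchor and positive share a tissue
# label, negative differs. The tiny MLP and random sampling are placeholders
# for FACT's adapted foundation-model backbone and mining strategy.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Linear(1024, 256), nn.ReLU(), nn.Linear(256, 128))
loss_fn = nn.TripletMarginLoss(margin=1.0)

def triplet_step(spectra: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """spectra: (N, 1024) REIMS-like vectors; labels: (N,) class ids."""
    z = embed(spectra)
    anchors, positives, negatives = [], [], []
    for i in range(len(labels)):
        same = torch.where(labels == labels[i])[0]
        same = same[same != i]
        diff = torch.where(labels != labels[i])[0]
        if len(same) and len(diff):  # need at least one positive and one negative
            anchors.append(z[i])
            positives.append(z[same[torch.randint(len(same), (1,))]].squeeze(0))
            negatives.append(z[diff[torch.randint(len(diff), (1,))]].squeeze(0))
    return loss_fn(torch.stack(anchors), torch.stack(positives), torch.stack(negatives))

loss = triplet_step(torch.randn(32, 1024), torch.randint(0, 2, (32,)))
loss.backward()
```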
Reinforcement learning for safe autonomous two-device navigation of cerebral vessels in mechanical thrombectomy.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-03 | DOI: 10.1007/s11548-025-03339-8
Harry Robertshaw, Benjamin Jackson, Jiaheng Wang, Hadi Sadati, Lennart Karstensen, Alejandro Granados, Thomas C Booth

Purpose: Autonomous systems in mechanical thrombectomy (MT) hold promise for reducing procedure times, minimizing radiation exposure, and enhancing patient safety. However, current reinforcement learning (RL) methods only reach the carotid arteries, are not generalizable to other patient vasculatures, and do not consider safety. We propose a safe dual-device RL algorithm that can navigate beyond the carotid arteries to cerebral vessels.

Methods: We used the Simulation Open Framework Architecture to represent the intricacies of cerebral vessels, and a modified Soft Actor-Critic RL algorithm to learn, for the first time, the navigation of micro-catheters and micro-guidewires. We incorporate patient safety metrics into our reward function by integrating guidewire tip forces. Inverse RL is used with demonstrator data on 12 patient-specific vascular cases.

Results: Our simulation demonstrates successful autonomous navigation within unseen cerebral vessels, achieving a 96% success rate, 7.0 s procedure time, and 0.24 N mean forces, well below the proposed 1.5 N vessel rupture threshold.

Conclusion: To the best of our knowledge, our proposed autonomous system for MT two-device navigation reaches cerebral vessels, considers safety, and is generalizable to unseen patient-specific cases for the first time. We envisage future work will extend the validation to vasculatures of different complexity and on in vitro models. While our contributions pave the way toward deploying agents in clinical settings, safety and trustworthiness will be crucial elements to consider when proposing new methodology.

Citations: 0
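Folding the guidewire tip force into the reward is what lets the policy trade progress against safety. The shaped reward below is a hedged illustration: the weights, the progress term, and using the 1.5 N rupture threshold for termination are assumptions consistent with the abstract, not values from the paper.

```python
# Illustrative shaped reward for safe endovascular navigation: reward progress
# toward the target, penalize guidewire tip force, and terminate at the 1.5 N
# rupture threshold cited in the abstract. All weights are assumptions.
RUPTURE_FORCE_N = 1.5   # vessel rupture threshold from the abstract
FORCE_WEIGHT = 0.5      # assumed force-penalty weight
STEP_PENALTY = 0.01     # assumed, encourages short procedures

def reward(prev_dist_mm: float, dist_mm: float,
           tip_force_n: float, reached_target: bool) -> tuple[float, bool]:
    """Return (reward, episode_done) for one simulation step."""
    if tip_force_n >= RUPTURE_FORCE_N:
        return -10.0, True            # unsafe contact: large penalty, end episode
    if reached_target:
        return +10.0, True
    progress = (prev_dist_mm - dist_mm) / 10.0   # positive when moving closer
    return progress - FORCE_WEIGHT * tip_force_n - STEP_PENALTY, False

r, done = reward(prev_dist_mm=42.0, dist_mm=40.5, tip_force_n=0.24, reached_target=False)
print(f"reward={r:.3f}, done={done}")
```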
Multimodal imaging platform for enhanced tumor resection in neurosurgery: integrating hyperspectral and pCLE technologies.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-03 | DOI: 10.1007/s11548-025-03340-1
Alfie Roddan, Tobias Czempiel, Chi Xu, Haozheng Xu, Alistair Weld, Vadzim Chalau, Giulio Anichini, Daniel S Elson, Stamatia Giannarou

Purpose: This work presents a novel multimodal imaging platform that integrates hyperspectral imaging (HSI) and probe-based confocal laser endomicroscopy (pCLE) for improved brain tumor identification during neurosurgery. By combining these two modalities, we aim to enhance surgical navigation, addressing the limitations of each modality when used independently.

Methods: We developed a multimodal imaging platform that integrates HSI and pCLE within an operating microscope setup using computer vision techniques. The system combines real-time, high-resolution HSI for macroscopic tissue analysis with pCLE for cellular-level imaging. The predictions of each modality, made using machine learning methods, are combined to improve tumor identification.

Results: Our evaluation of the multimodal system revealed low spatial error, with minimal reprojection discrepancies, ensuring precise alignment between the HSI and pCLE. This combined imaging approach, together with our multimodal tissue characterization algorithm, significantly improves tumor identification, yielding higher Dice and Recall scores compared to using HSI or pCLE individually.

Conclusion: Our multimodal imaging platform represents a crucial first step toward enhancing tumor identification by combining HSI and pCLE modalities for the first time. We highlight improvements in metrics such as the Dice score and Recall, underscoring the potential for further advancements in this area.

Citations: 0
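The abstract does not spell out the fusion rule, so the sketch below shows a generic late fusion of the two modalities' probability maps followed by a Dice score, purely as a stand-in for the paper's multimodal tissue characterization algorithm.

```python
# Generic late fusion of two modality probability maps plus a Dice score,
# standing in for the paper's (unspecified here) multimodal algorithm.
import numpy as np

def fuse(p_hsi: np.ndarray, p_pcle: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted average of per-pixel tumor probabilities from HSI and pCLE."""
    return w * p_hsi + (1.0 - w) * p_pcle

def dice(pred: np.ndarray, truth: np.ndarray, thresh: float = 0.5) -> float:
    a, b = pred >= thresh, truth.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / max(a.sum() + b.sum(), 1)

rng = np.random.default_rng(0)
truth = np.zeros((64, 64)); truth[20:40, 20:40] = 1          # synthetic tumor mask
p_hsi = np.clip(truth + 0.3 * rng.standard_normal(truth.shape), 0, 1)
p_pcle = np.clip(truth + 0.3 * rng.standard_normal(truth.shape), 0, 1)
print(f"fused Dice: {dice(fuse(p_hsi, p_pcle), truth):.3f}")
```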
A real-time approach for surgical activity recognition and prediction based on transformer models in robot-assisted surgery.
IF 2.3 | Medicine (Tier 3)
Pub Date: 2025-04-01 (Epub 2025-01-12) | DOI: 10.1007/s11548-024-03306-9 | Pages: 743-752
Ketai Chen, D S V Bandara, Jumpei Arata

Purpose: This paper presents a deep learning approach to recognize and predict surgical activity in robot-assisted minimally invasive surgery (RAMIS). Our primary objective is to deploy the developed model in a real-time surgical risk monitoring system for RAMIS.

Methods: We propose a modified Transformer model with no positional encoding, 5 fully connected layers, 1 encoder, and 3 decoders. The model is designed to address 3 primary tasks in surgical robotics: gesture recognition, gesture prediction, and end-effector trajectory prediction. Notably, it operates solely on kinematic data obtained from the joints of the robotic arm.

Results: The model's performance was evaluated on the JHU-ISI Gesture and Skill Assessment Working Set, achieving a highest accuracy of 94.4% for gesture recognition, 84.82% for gesture prediction, and a low distance error of 1.34 mm when predicting 1 s in advance. The computational time per iteration was only 4.2 ms.

Conclusion: The results demonstrate that our proposed model outperforms previous approaches, highlighting its potential for integration into real-time systems. We believe the model could significantly advance surgical activity recognition and prediction in robot-assisted surgery and make a substantial contribution to the healthcare sector.

Citations: 0
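The described model is unusual in omitting positional encoding while still classifying kinematic windows. A minimal PyTorch sketch of that idea follows; the joint-feature dimension, layer sizes, and 15-way gesture head are placeholders, not the paper's exact 1-encoder/3-decoder configuration.

```python
# Minimal sketch of a gesture classifier over kinematic windows: a Transformer
# encoder with no positional encoding, as the abstract specifies. Feature and
# class counts are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class KinematicTransformer(nn.Module):
    def __init__(self, kin_dim: int = 38, n_gestures: int = 15, d_model: int = 128):
        super().__init__()
        self.proj = nn.Linear(kin_dim, d_model)   # embed joint readings per timestep
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.head = nn.Linear(d_model, n_gestures)

    def forward(self, kin: torch.Tensor) -> torch.Tensor:
        # kin: (batch, time, kin_dim); no positional encoding added, on purpose.
        h = self.encoder(self.proj(kin))
        return self.head(h.mean(dim=1))           # pool over time -> gesture logits

logits = KinematicTransformer()(torch.randn(4, 60, 38))
print(logits.shape)  # torch.Size([4, 15])
```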