International Journal of Computer Assisted Radiology and Surgery: Latest Articles

Virtual fluoroscopy for interventional guidance using magnetic tracking.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-16 · DOI: 10.1007/s11548-025-03395-0
Shuwei Xing, Inaara Ahmed-Fazal, Utsav Pardasani, Uditha Jayarathne, Scott Illsley, Aaron Fenster, Terry M Peters, Elvis C S Chen
{"title":"Virtual fluoroscopy for interventional guidance using magnetic tracking.","authors":"Shuwei Xing, Inaara Ahmed-Fazal, Utsav Pardasani, Uditha Jayarathne, Scott Illsley, Aaron Fenster, Terry M Peters, Elvis C S Chen","doi":"10.1007/s11548-025-03395-0","DOIUrl":"https://doi.org/10.1007/s11548-025-03395-0","url":null,"abstract":"<p><strong>Purpose: </strong>In conventional fluoroscopy-guided interventions, the 2D projective nature of X-ray imaging limits depth perception and leads to prolonged radiation exposure. Virtual fluoroscopy, combined with spatially tracked surgical instruments, is a promising strategy to mitigate these limitations. While magnetic tracking shows unique advantages, particularly in tracking flexible instruments, it remains under-explored due to interference from ferromagnetic materials in the C-arm room. This work proposes a virtual fluoroscopy workflow by effectively integrating magnetic tracking and demonstrates its clinical efficacy METHODS: An automatic virtual fluoroscopy workflow was developed using a radiolucent tabletop field generator prototype. Specifically, we developed a fluoro-CT registration approach with automatic 2D-3D shared landmark correspondence to establish the C-arm-patient relationship, along with a general C-arm modelling approach to calculate desired poses and generate corresponding virtual fluoroscopic images.</p><p><strong>Results: </strong>Testing on a dataset with views ranging from RAO  <math> <msup><mrow><mn>90</mn></mrow> <mo>∘</mo></msup> </math> to LAO  <math> <msup><mrow><mn>90</mn></mrow> <mo>∘</mo></msup> </math> , simulated fluoroscopic images showed visually imperceptible differences from the real ones, achieving a mean target projection distance error of 1.55 mm. An \"endoleak\" phantom insertion experiment highlighted the effectiveness of simulating multiplanar views with real-time instrument overlays, achieving a mean needle tip error of 3.42 mm.</p><p><strong>Conclusions: </strong>Results demonstrated the efficacy of virtual fluoroscopy integrated with magnetic tracking, improving depth perception during navigation. The broad capture range of virtual fluoroscopy showed promise in improving the users' understanding of X-ray imaging principles, facilitating more efficient image acquisition.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144086552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
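The core geometric step described above is establishing the C-arm-patient relationship and then projecting the tracked instrument into each virtual view. The paper's own C-arm model is not reproduced in the abstract; the sketch below assumes a generic pinhole projection model, and the names `project_tip`, `T_carm_from_ct`, and `K` are hypothetical, purely to illustrate how a magnetically tracked tip could be overlaid on a virtual fluoroscopic image.

```python
import numpy as np

def project_tip(tip_ct, T_carm_from_ct, K):
    """Project a 3D instrument tip (CT coordinates, mm) onto a
    virtual fluoroscopic image via a pinhole C-arm model.

    tip_ct:         (3,) tip position in CT space.
    T_carm_from_ct: (4,4) rigid transform from CT to C-arm source frame,
                    e.g. recovered by fluoro-CT landmark registration.
    K:              (3,3) intrinsics (source-to-detector geometry).
    """
    p = T_carm_from_ct @ np.append(tip_ct, 1.0)   # to C-arm frame
    uvw = K @ p[:3]                               # perspective projection
    return uvw[:2] / uvw[2]                       # pixel coordinates

# Hypothetical numbers purely for illustration.
K = np.array([[1200.0, 0.0, 512.0],
              [0.0, 1200.0, 512.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4); T[2, 3] = 800.0                    # tip 800 mm from source
print(project_tip(np.zeros(3), T, K))             # -> [512. 512.]
```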
A deep learning-based approach to automated rib fracture detection and CWIS classification.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-16 · DOI: 10.1007/s11548-025-03390-5
Victoria Marting, Noor Borren, Max R van Diepen, Esther M M van Lieshout, Mathieu M E Wijffels, Theo van Walsum
{"title":"A deep learning-based approach to automated rib fracture detection and CWIS classification.","authors":"Victoria Marting, Noor Borren, Max R van Diepen, Esther M M van Lieshout, Mathieu M E Wijffels, Theo van Walsum","doi":"10.1007/s11548-025-03390-5","DOIUrl":"https://doi.org/10.1007/s11548-025-03390-5","url":null,"abstract":"<p><strong>Purpose: </strong>Trauma-induced rib fractures are a common injury. The number and characteristics of these fractures influence whether a patient is treated nonoperatively or surgically. Rib fractures are typically diagnosed using CT scans, yet 19.2-26.8% of fractures are still missed during assessment. Another challenge in managing rib fractures is the interobserver variability in their classification. Purpose of this study was to develop and assess an automated method that detects rib fractures in CT scans, and classifies them according to the Chest Wall Injury Society (CWIS) classification.</p><p><strong>Methods: </strong>198 CT scans were collected, of which 170 were used for training and internal validation, and 28 for external validation. Fractures and their classifications were manually annotated in each of the scans. A detection and classification network was trained for each of the three components of the CWIS classifications. In addition, a rib number labeling network was trained for obtaining the rib number of a fracture. Experiments were performed to assess the method performance.</p><p><strong>Results: </strong>On the internal test set, the method achieved a detection sensitivity of 80%, at a precision of 87%, and an F1-score of 83%, with a mean number of FPPS (false positives per scan) of 1.11. Classification sensitivity varied, with the lowest being 25% for complex fractures and the highest being 97% for posterior fractures. The correct rib number was assigned to 94% of the detected fractures. The custom-trained nnU-Net correctly labeled 95.5% of all ribs and 98.4% of fractured ribs in 30 patients. The detection and classification performance on the external validation dataset was slightly better, with a fracture detection sensitivity of 84%, precision of 85%, F1-score of 84%, FPPS of 0.96 and 95% of the fractures were assigned the correct rib number.</p><p><strong>Conclusion: </strong>The method developed is able to accurately detect and classify rib fractures in CT scans, there is room for improvement in the (rare and) underrepresented classes in the training set.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
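For readers reimplementing the evaluation, the reported figures (sensitivity, precision, F1-score, FPPS) follow directly from per-scan true-positive, false-positive, and false-negative counts. A minimal sketch with hypothetical counts:

```python
def detection_metrics(per_scan):
    """Aggregate detection metrics as reported in the abstract.

    per_scan: list of (tp, fp, fn) counts, one tuple per CT scan.
    Returns sensitivity (recall), precision, F1-score, and false
    positives per scan (FPPS).
    """
    tp = sum(s[0] for s in per_scan)
    fp = sum(s[1] for s in per_scan)
    fn = sum(s[2] for s in per_scan)
    sens = tp / (tp + fn)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    fpps = fp / len(per_scan)
    return sens, prec, f1, fpps

# Hypothetical counts for three scans.
print(detection_metrics([(8, 1, 2), (5, 2, 1), (7, 0, 1)]))
```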
Application of deep learning with fractal images to sparse-view CT.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-15 · DOI: 10.1007/s11548-025-03378-1
Ren Kawaguchi, Tomoya Minagawa, Kensuke Hori, Takeyuki Hashimoto
{"title":"Application of deep learning with fractal images to sparse-view CT.","authors":"Ren Kawaguchi, Tomoya Minagawa, Kensuke Hori, Takeyuki Hashimoto","doi":"10.1007/s11548-025-03378-1","DOIUrl":"https://doi.org/10.1007/s11548-025-03378-1","url":null,"abstract":"<p><strong>Purpose: </strong>Deep learning has been widely used in research on sparse-view computed tomography (CT) image reconstruction. While sufficient training data can lead to high accuracy, collecting medical images is often challenging due to legal or ethical concerns, making it necessary to develop methods that perform well with limited data. To address this issue, we explored the use of nonmedical images for pre-training. Therefore, in this study, we investigated whether fractal images could improve the quality of sparse-view CT images, even with a reduced number of medical images.</p><p><strong>Methods: </strong>Fractal images generated by an iterated function system (IFS) were used for nonmedical images, and medical images were obtained from the CHAOS dataset. Sinograms were then generated using 36 projections in sparse-view and the images were reconstructed by filtered back-projection (FBP). FBPConvNet and WNet (first module: learning fractal images, second module: testing medical images, and third module: learning output) were used as networks. The effectiveness of pre-training was then investigated for each network. The quality of the reconstructed images was evaluated using two indices: structural similarity (SSIM) and peak signal-to-noise ratio (PSNR).</p><p><strong>Results: </strong>The network parameters pre-trained with fractal images showed reduced artifacts compared to the network trained exclusively with medical images, resulting in improved SSIM. WNet outperformed FBPConvNet in terms of PSNR. Pre-training WNet with fractal images produced the best image quality, and the number of medical images required for main-training was reduced from 5000 to 1000 (80% reduction).</p><p><strong>Conclusion: </strong>Using fractal images for network training can reduce the number of medical images required for artifact reduction in sparse-view CT. Therefore, fractal images can improve accuracy even with a limited amount of training data in deep learning.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144080487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
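The pre-training images come from an iterated function system: a point is repeatedly pushed through randomly chosen affine maps and the visited locations are rasterized. The paper's exact IFS parameters are not given in the abstract; the sketch below uses random contractive-ish affine maps purely for illustration.

```python
import numpy as np

def ifs_image(transforms, probs, n_points=200_000, size=256, rng=None):
    """Render a binary fractal image from an iterated function system:
    repeatedly apply a randomly chosen affine map (A, b) to a point
    and accumulate the visited locations on a raster grid.
    """
    rng = rng or np.random.default_rng(0)
    img = np.zeros((size, size), dtype=np.float32)
    x = np.zeros(2)
    for _ in range(n_points):
        A, b = transforms[rng.choice(len(transforms), p=probs)]
        x = A @ x + b
        i, j = ((x + 3.0) / 6.0 * (size - 1)).astype(int)  # map to pixels
        if 0 <= i < size and 0 <= j < size:
            img[j, i] = 1.0
    return img

# A random 3-map affine IFS (illustrative, not the paper's parameters).
rng = np.random.default_rng(42)
maps = [(rng.uniform(-0.6, 0.6, (2, 2)), rng.uniform(-1, 1, 2)) for _ in range(3)]
fractal = ifs_image(maps, probs=[1 / 3, 1 / 3, 1 / 3])
```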
Automated vision-based assistance tools in bronchoscopy: stenosis severity estimation.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-15 · DOI: 10.1007/s11548-025-03398-x
Clara Tomasini, Javier Rodriguez-Puigvert, Dinora Polanco, Manuel Viñuales, Luis Riazuelo, Ana C Murillo
{"title":"Automated vision-based assistance tools in bronchoscopy: stenosis severity estimation.","authors":"Clara Tomasini, Javier Rodriguez-Puigvert, Dinora Polanco, Manuel Viñuales, Luis Riazuelo, Ana C Murillo","doi":"10.1007/s11548-025-03398-x","DOIUrl":"https://doi.org/10.1007/s11548-025-03398-x","url":null,"abstract":"<p><strong>Purpose: </strong>Subglottic stenosis refers to the narrowing of the subglottis, the airway between the vocal cords and the trachea. Its severity is typically evaluated by estimating the percentage of obstructed airway. This estimation can be obtained from CT data or through visual inspection by experts exploring the region. However, visual inspections are inherently subjective, leading to less consistent and robust diagnoses. No public methods or datasets are currently available for automated evaluation of this condition from bronchoscopy video.</p><p><strong>Methods: </strong>We propose a pipeline for automated subglottic stenosis severity estimation during the bronchoscopy exploration, without requiring the physician to traverse the stenosed region. Our approach exploits the physical effect of illumination decline in endoscopy to segment and track the lumen and obtain a 3D model of the airway. This 3D model is obtained from a single frame and is used to measure the airway narrowing.</p><p><strong>Results: </strong>Our pipeline is the first to enable automated and robust subglottic stenosis severity measurement using bronchoscopy images. The results show consistency with ground-truth estimations from CT scans and expert estimations and reliable repeatability across multiple estimations on the same patient. Our evaluation is performed on our new Subglottic Stenosis Dataset of real bronchoscopy procedures data.</p><p><strong>Conclusion: </strong>We demonstrate how to automate evaluation of subglottic stenosis severity using only bronchoscopy. Our approach can assist with and shorten diagnosis and monitoring procedures, with automated and repeatable estimations and less exploration time, and save radiation exposure to patients as no CT is required. Additionally, we release the first public benchmark for subglottic stenosis severity assessment.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144080525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
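Severity is conventionally reported as the percentage of obstructed airway, i.e., one minus the ratio of the stenosed lumen cross-section to a healthy reference cross-section. Below is a minimal sketch of that final measurement step, assuming binary lumen masks have already been extracted from the 3D airway model; the mask names and pixel spacing are hypothetical.

```python
import numpy as np

def stenosis_severity(mask_stenosis, mask_reference, px_area_mm2=1.0):
    """Percent airway obstruction from two binary lumen cross-section
    masks: the narrowest stenosed slice and a healthy reference slice.
    severity = (1 - A_stenosis / A_reference) * 100
    """
    a_sten = mask_stenosis.sum() * px_area_mm2
    a_ref = mask_reference.sum() * px_area_mm2
    return (1.0 - a_sten / a_ref) * 100.0

# Hypothetical circular lumens: reference radius 40 px, stenosed 24 px.
yy, xx = np.mgrid[:128, :128]
ref = ((yy - 64) ** 2 + (xx - 64) ** 2) < 40 ** 2
sten = ((yy - 64) ** 2 + (xx - 64) ** 2) < 24 ** 2
print(f"{stenosis_severity(sten, ref):.1f}% obstruction")  # approx. 64%
```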
A methodological framework for integrating model-guided medicine and multidimensional information management systems: application in anti-aging healthcare.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-15 · DOI: 10.1007/s11548-025-03337-w
Hugo Herrero Antón de Vez, Esteban Felez, Mario A Cypko
{"title":"A methodological framework for integrating model-guided medicine and multidimensional information management systems: application in anti-aging healthcare.","authors":"Hugo Herrero Antón de Vez, Esteban Felez, Mario A Cypko","doi":"10.1007/s11548-025-03337-w","DOIUrl":"https://doi.org/10.1007/s11548-025-03337-w","url":null,"abstract":"<p><strong>Purpose: </strong>This study introduces a methodological framework for integrating model-guided medicine (MGM) with multidimensional information management systems (MIMMS) to address healthcare's digital transformation challenges. Anti-aging medicine is used as a case study to illustrate the framework's adaptability and practical application in terms of data integration, workflow transparency, and AI auditability.</p><p><strong>Methods: </strong>The methodology combines MGM and MIMMS to manage complex healthcare data through patient-specific, semantic, and syntactic models. Automated workflows streamline processes from data acquisition to decision-making. Integration is demonstrated with metabolic assessments and patient-specific modeling.</p><p><strong>Results: </strong>The framework effectively integrates multi-domain data, enhancing interoperability, workflow transparency, and AI auditability. A case study in anti-aging medicine illustrates its practical utility and scalability, addressing limitations of existing systems and highlighting potential for broader applications.</p><p><strong>Conclusion: </strong>This methodological framework offers a novel approach to advancing digital healthcare transformation by enabling integrated, patient-centric workflows. While not yet applied in a clinical setting, its conceptual application to anti-aging medicine illustrates the framework's adaptability and potential to enhance healthcare standards across various domains. Future work will focus on real-world validation and refinement to further demonstrate its practical impact.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144081971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Automated multimodel segmentation and tracking for AR-guided open liver surgery using scene-aware self-prompting.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-14 · DOI: 10.1007/s11548-025-03381-6
Serouj Khajarian, Michael Schwimmbeck, Konstantin Holzapfel, Johannes Schmidt, Christopher Auer, Stefanie Remmele, Oliver Amft
{"title":"Automated multimodel segmentation and tracking for AR-guided open liver surgery using scene-aware self-prompting.","authors":"Serouj Khajarian, Michael Schwimmbeck, Konstantin Holzapfel, Johannes Schmidt, Christopher Auer, Stefanie Remmele, Oliver Amft","doi":"10.1007/s11548-025-03381-6","DOIUrl":"https://doi.org/10.1007/s11548-025-03381-6","url":null,"abstract":"<p><strong>Purpose: </strong>We introduce a multimodel, real-time semantic segmentation and tracking approach for Augmented Reality (AR)-guided open liver surgery. Our approach leverages foundation models and scene-aware re-prompting strategies to balance segmentation accuracy and inference time as required for real-time AR-assisted surgery applications.</p><p><strong>Methods: </strong>Our approach integrates a domain-specific RGBD model (ESANet), a foundation model for semantic segmentation (SAM), and a semi-supervised video object segmentation model (DeAOT). Models were combined in an auto-promptable pipeline with a scene-aware re-prompting algorithm that adapts to surgical scene changes. We evaluated our approach on intraoperative RGBD videos from 10 open liver surgeries using a head-mounted AR device. Segmentation accuracy (IoU), temporal resolution (FPS), and the impact of re-prompting strategies were analyzed. Comparisons to individual models were performed.</p><p><strong>Results: </strong>Our multimodel approach achieved a median IoU of 71% at 13.2 FPS without re-prompting. Performance of our multimodel approach surpasses that of individual models, yielding better segmentation accuracy than ESANet and better temporal resolution compared to SAM. Our scene-aware re-prompting method reaches the DeAOT performance, with an IoU of 74.7% at 11.5 FPS, even when the DeAOT model uses an ideal reference frame.</p><p><strong>Conclusion: </strong>Our scene-aware re-prompting strategy provides a trade-off between segmentation accuracy and temporal resolution, thus addressing the requirements of real-time AR-guided open liver surgery. The integration of complementary models resulted in robust and accurate segmentation in a complex, real-world surgical settings.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144042828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
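The abstract describes an auto-promptable pipeline in which a fast tracker handles most frames and the slower promptable segmenter is invoked only when the scene changes. The trigger condition is not specified in the abstract; the schematic below assumes a mask-IoU drop as the re-prompting signal, with `fast_track` and `slow_segment` standing in for DeAOT-style and SAM-style models. It is a sketch of the general pattern, not the paper's exact algorithm.

```python
import numpy as np

def mask_iou(a, b):
    """IoU between two boolean masks."""
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 0.0

def segment_stream(frames, fast_track, slow_segment, iou_floor=0.5):
    """Scene-aware re-prompting loop (schematic): a fast
    video-object-segmentation model propagates the mask frame to
    frame; the slower promptable segmenter is invoked only when the
    propagated mask diverges, here detected as a large IoU drop
    against the previous mask.

    fast_track(frame, prev_mask) -> mask   # e.g. a DeAOT-style tracker
    slow_segment(frame) -> mask            # e.g. a SAM-style model
    """
    mask = slow_segment(frames[0])               # initial prompt
    yield mask
    for frame in frames[1:]:
        new_mask = fast_track(frame, mask)
        if mask_iou(mask, new_mask) < iou_floor:  # assumed trigger
            new_mask = slow_segment(frame)        # re-prompt
        mask = new_mask
        yield mask
```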
Acquiring submillimeter-accurate multi-task vision datasets for computer-assisted orthopedic surgery.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-14 · DOI: 10.1007/s11548-025-03385-2
Emma Most, Jonas Hein, Frédéric Giraud, Nicola A Cavalcanti, Lukas Zingg, Baptiste Brument, Nino Louman, Fabio Carrillo, Philipp Fürnstahl, Lilian Calvet
{"title":"Acquiring submillimeter-accurate multi-task vision datasets for computer-assisted orthopedic surgery.","authors":"Emma Most, Jonas Hein, Frédéric Giraud, Nicola A Cavalcanti, Lukas Zingg, Baptiste Brument, Nino Louman, Fabio Carrillo, Philipp Fürnstahl, Lilian Calvet","doi":"10.1007/s11548-025-03385-2","DOIUrl":"https://doi.org/10.1007/s11548-025-03385-2","url":null,"abstract":"<p><strong>Purpose: </strong>Advances in computer vision, particularly in optical image-based 3D reconstruction and feature matching, enable applications like marker-less surgical navigation and digitization of surgery. However, their development is hindered by a lack of suitable datasets with 3D ground truth. This work explores an approach to generating realistic and accurate ex vivo datasets tailored for 3D reconstruction and feature matching in open orthopedic surgery.</p><p><strong>Methods: </strong>A set of posed images and an accurately registered ground truth surface mesh of the scene are required to develop vision-based 3D reconstruction and matching methods suitable for surgery. We propose a framework consisting of three core steps and compare different methods for each step: 3D scanning, calibration of viewpoints for a set of high-resolution RGB images, and an optical method for scene registration.</p><p><strong>Results: </strong>We evaluate each step of this framework on an ex vivo scoliosis surgery using a pig spine, conducted under real operating room conditions. A mean 3D Euclidean error of 0.35 mm is achieved with respect to the 3D ground truth.</p><p><strong>Conclusion: </strong>The proposed method results in submillimeter-accurate 3D ground truths and surgical images with a spatial resolution of 0.1 mm. This opens the door to acquiring future surgical datasets for high-precision applications.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143998207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
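The headline accuracy figure is a mean 3D Euclidean error against the registered ground-truth mesh. Assuming point correspondences between the reconstruction and the ground truth have already been established (the abstract does not detail how), the metric reduces to:

```python
import numpy as np

def mean_euclidean_error(pred_pts, gt_pts):
    """Mean 3D Euclidean distance (mm) between corresponding
    reconstructed and ground-truth points, as in the reported
    0.35 mm figure. Both arrays have shape (N, 3)."""
    return float(np.linalg.norm(pred_pts - gt_pts, axis=1).mean())

# Hypothetical check: a reconstruction off by 0.3 mm along x everywhere.
gt = np.random.default_rng(0).uniform(0, 100, (1000, 3))
pred = gt + np.array([0.3, 0.0, 0.0])
print(mean_euclidean_error(pred, gt))  # -> 0.3
```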
When do they StOP?: A first step toward automatically identifying team communication in the operating room.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-14 · DOI: 10.1007/s11548-025-03393-2
Keqi Chen, Lilien Schewski, Vinkle Srivastav, Joël Lavanchy, Didier Mutter, Guido Beldi, Sandra Keller, Nicolas Padoy
{"title":"When do they StOP?: A first step toward automatically identifying team communication in the operating room.","authors":"Keqi Chen, Lilien Schewski, Vinkle Srivastav, Joël Lavanchy, Didier Mutter, Guido Beldi, Sandra Keller, Nicolas Padoy","doi":"10.1007/s11548-025-03393-2","DOIUrl":"https://doi.org/10.1007/s11548-025-03393-2","url":null,"abstract":"<p><strong>Purpose: </strong>Surgical performance depends not only on surgeons' technical skills, but also on team communication within and across the different professional groups present during the operation. Therefore, automatically identifying team communication in the OR is crucial for patient safety and advances in the development of computer-assisted surgical workflow analysis and intra-operative support systems. To take the first step, we propose a new task of detecting communication briefings involving all OR team members, i.e., the team Time-out and the StOP?-protocol, by localizing their start and end times in video recordings of surgical operations.</p><p><strong>Methods: </strong>We generate an OR dataset of real surgeries, called Team-OR, with more than one hundred hours of surgical videos captured by the multi-view camera system in the OR. The dataset contains temporal annotations of 33 Time-out and 22 StOP?-protocol activities in total. We then propose a novel group activity detection approach, where we encode both scene context and action features, and use an efficient neural network model to output the results.</p><p><strong>Results: </strong>The experimental results on the Team-OR dataset show that our approach outperforms existing state-of-the-art temporal action detection approaches. It also demonstrates the lack of research on group activities in the OR, proving the significance of our dataset.</p><p><strong>Conclusion: </strong>We investigate the Team Time-Out and the StOP?-protocol in the OR, by presenting the first OR dataset with temporal annotations of group activities protocols, and introducing a novel group activity detection approach that outperforms existing approaches. Code is available at https://github.com/CAMMA-public/Team-OR .</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144040322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
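Since the task is to localize the start and end times of Time-out and StOP?-protocol briefings, a standard building block for evaluation is the temporal IoU between a predicted and an annotated segment. The paper's exact matching protocol is not stated in the abstract; the sketch below gives the generic definition used throughout temporal action detection.

```python
def temporal_iou(pred, gt):
    """Temporal IoU between a predicted and a ground-truth activity
    segment, each given as (start_s, end_s) in seconds."""
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

# A briefing predicted at 120-180 s against an annotation at 130-200 s.
print(temporal_iou((120.0, 180.0), (130.0, 200.0)))  # -> 0.625
```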
SHADeS: self-supervised monocular depth estimation through non-Lambertian image decomposition.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-13 · DOI: 10.1007/s11548-025-03371-8
Rema Daher, Francisco Vasconcelos, Danail Stoyanov
{"title":"SHADeS: self-supervised monocular depth estimation through non-Lambertian image decomposition.","authors":"Rema Daher, Francisco Vasconcelos, Danail Stoyanov","doi":"10.1007/s11548-025-03371-8","DOIUrl":"https://doi.org/10.1007/s11548-025-03371-8","url":null,"abstract":"<p><strong>Purpose: </strong>Visual 3D scene reconstruction can support colonoscopy navigation. It can help in recognising which portions of the colon have been visualised and characterising the size and shape of polyps. This is still a very challenging problem due to complex illumination variations, including abundant specular reflections. We investigate how to effectively decouple light and depth in this problem.</p><p><strong>Methods: </strong>We introduce a self-supervised model that simultaneously characterises the shape and lighting of the visualised colonoscopy scene. Our model estimates shading, albedo, depth, and specularities (SHADeS) from single images. Unlike previous approaches (IID (Li et al. IEEE J Biomed Health Inform https://doi.org/10.1109/JBHI.2024.3400804 , 2024)), we use a non-Lambertian model that treats specular reflections as a separate light component. The implementation of our method is available at https://github.com/RemaDaher/SHADeS .</p><p><strong>Results: </strong>We demonstrate on real colonoscopy images (Hyper Kvasir) that previous models for light decomposition (IID) and depth estimation (MonoViT, ModoDepth2) are negatively affected by specularities. In contrast, SHADeS can simultaneously produce light decomposition and depth maps that are robust to specular regions. We also perform a quantitative comparison on phantom data (C3VD) where we further demonstrate the robustness of our model.</p><p><strong>Conclusion: </strong>Modelling specular reflections improves depth estimation in colonoscopy. We propose an effective self-supervised approach that uses this insight to jointly estimate light decomposition and depth. Light decomposition has the potential to help with other problems, such as place recognition within the colon.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144020482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
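The key modelling choice in the abstract is treating specularities as a separate light component rather than folding them into shading. A common way to write such a non-Lambertian decomposition, and a plausible reconstruction target for the self-supervision (the paper's exact formulation may differ), is image = albedo × shading + specular:

```python
import numpy as np

def compose_non_lambertian(albedo, shading, specular):
    """Recompose an image under an additive-specular, non-Lambertian
    model: I = albedo * shading + specular. In a self-supervised
    setup, the L1/L2 gap between this recomposition and the input
    image could serve as a reconstruction loss (assumption, not the
    paper's stated loss).

    albedo:   (H, W, 3) reflectance in [0, 1].
    shading:  (H, W)    diffuse illumination.
    specular: (H, W)    separate specular component.
    """
    recomposed = albedo * shading[..., None] + specular[..., None]
    return np.clip(recomposed, 0.0, 1.0)
```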
Optimizing registration uncertainty visualization to support intraoperative decision-making during brain tumor resection.
IF 2.3 · Medicine (CAS Tier 3)
International Journal of Computer Assisted Radiology and Surgery · Pub Date: 2025-05-13 · DOI: 10.1007/s11548-025-03407-z
M Geshvadi, R Dorent, C Galvin, L Rigolo, N Haouchine, T Kapur, S Pieper, M Vangel, W M Wells, A J Golby, D Haehn, S Frisken
{"title":"Optimizing registration uncertainty visualization to support intraoperative decision-making during brain tumor resection.","authors":"M Geshvadi, R Dorent, C Galvin, L Rigolo, N Haouchine, T Kapur, S Pieper, M Vangel, W M Wells, A J Golby, D Haehn, S Frisken","doi":"10.1007/s11548-025-03407-z","DOIUrl":"https://doi.org/10.1007/s11548-025-03407-z","url":null,"abstract":"<p><strong>Purpose: </strong>Neurosurgeons need to precisely localize and resect tumors without damaging critical brain tissue. However, deformation of the brain (i.e., 'brain shift') and other factors introduce uncertainty during image-guided surgery. We present a new visualization software that supports qualitative and quantitative exploration of the effectiveness of a broad range of methods for communicating uncertainty. We expect that the ability to visualize uncertainty during surgery will help surgeons better understand uncertainty in neuronavigation and make more informed decisions.</p><p><strong>Methods: </strong>We developed UVisExplore, a software module for exploring various visualization techniques for understanding the spatial distribution of uncertainty in image registration. UVisExplore incorporates multiple classic uncertainty visualization techniques and introduces two novel paradigms appropriate for surgical environments. We also introduce a novel game-based approach to evaluate visualization effectiveness before surgery. The game scenario emulates the cognitive decision-making process during tumor resection allowing quantitative evaluation of visualization effectiveness in a non-threatening environment while training neurosurgeons to better understand uncertainty.</p><p><strong>Results: </strong>Six clinicians and three computer scientists participated in a study using our game. Participants explored different uncertainty visualization techniques in a tumor resection task and provided feedback. Surgeon-participants preferred surgeon-centric approaches, which emphasize uncertainty near the surgical probe. They also preferred explicit numerical measures of uncertainty displayed in millimeters. The game provided valuable insights into uncertainty visualization preferences and interpretation.</p><p><strong>Conclusions: </strong>We provide an open-source 3D Slicer module for visualizing registration uncertainty and a game that allows users to explore uncertainty visualization for tumor resection surgery. UVisExplore provides a platform for exploring and comparing various uncertainty visualization techniques while simulating the decision-making process during surgery. The visualization module and the game proved to be a valuable communication tool and helped narrow the field of candidate visualizations that we plan to test during surgical procedures in the next phase of our research.</p>","PeriodicalId":51251,"journal":{"name":"International Journal of Computer Assisted Radiology and Surgery","volume":" ","pages":""},"PeriodicalIF":2.3,"publicationDate":"2025-05-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144059351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
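Surgeon-participants favored surgeon-centric displays that show an explicit millimeter figure near the probe. Below is a schematic of such a readout, assuming a precomputed voxel grid of registration-uncertainty magnitudes; the grid, its geometry, and the function name are hypothetical, not taken from UVisExplore.

```python
import numpy as np

def uncertainty_at_probe(field_mm, origin, spacing, probe_xyz):
    """Surgeon-centric readout (schematic): nearest-voxel lookup of a
    precomputed registration-uncertainty volume (mm) at the tracked
    probe tip, so the display can show an explicit millimeter figure
    next to the tool instead of color-coding the whole scene.

    field_mm:  (Z, Y, X) voxel grid of uncertainty magnitudes in mm.
    origin:    (3,) grid origin in patient coordinates (x, y, z), mm.
    spacing:   (3,) voxel spacing (x, y, z), mm.
    probe_xyz: (3,) probe tip in patient coordinates, mm.
    """
    idx = np.round((np.asarray(probe_xyz) - origin) / spacing).astype(int)
    idx = np.clip(idx, 0, np.array(field_mm.shape)[::-1] - 1)  # stay in grid
    x, y, z = idx
    return float(field_mm[z, y, x])

# Hypothetical 2 mm-spacing grid with uniform 1.5 mm uncertainty.
field = np.full((64, 64, 64), 1.5)
print(uncertainty_at_probe(field, np.zeros(3), np.full(3, 2.0),
                           (30.0, 42.0, 55.0)))  # -> 1.5
```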