Neurosurgical Focus: Latest Articles

A multiregional multimodal machine learning model for predicting outcome of surgery for symptomatic hemorrhagic brainstem cavernous malformations.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS24778
Xuchen Dong, Haohuai Gui, Kai Quan, Zongze Li, Ying Xiao, Jiaxi Zhou, Yuchuan Zhao, Dongdong Wang, Mingjian Liu, Haojing Duan, Shaoxuan Yang, Xiaolei Lin, Jun Dong, Lin Wang, Yu Ma, Wei Zhu
Objective: Given that resection of brainstem cavernous malformations (BSCMs) ends hemorrhaging but carries a high risk of neurological deficits, it is necessary to develop and validate a model predicting surgical outcomes. This study aimed to construct a BSCM surgery outcome prediction model based on clinical characteristics and T2-weighted MRI-based radiomics.

Methods: Two separate cohorts of patients undergoing BSCM resection were included as discovery and validation sets. Patient characteristics and imaging data were analyzed. An unfavorable outcome was defined as a modified Rankin Scale score > 2 at the 12-month follow-up. Image features were extracted from regions of interest within lesions and the adjacent brainstem. A nomogram was constructed using the risk score from the optimal model.

Results: The discovery and validation sets comprised 218 and 49 patients, respectively (mean age 40 ± 14 years, 127 females); 63 patients in the discovery set and 35 in the validation set had an unfavorable outcome. The eXtreme Gradient Boosting imaging model with selected radiomics features achieved the best performance (area under the receiver operating characteristic curve [AUC] 0.82). Patients were stratified into high- and low-risk groups based on risk scores computed from this model (optimal cutoff 0.37). The final integrative multimodal prognostic model attained an AUC of 0.90, surpassing both the imaging and clinical models alone.

Conclusions: Inclusion of BSCM and brainstem subregion imaging data in machine learning models yielded significant predictive capability for unfavorable postoperative outcomes. The integration of specific clinical features enhanced prediction accuracy.

Neurosurgical focus 59(1): E7.
Citations: 0
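The evaluation step described above (per-patient risk scores thresholded at a reported cutoff of 0.37, discrimination summarized as an AUC) can be sketched in a few lines. This is a self-contained illustration with invented scores and labels, not the study's model or data:

```python
import numpy as np

def auc_score(labels, scores):
    """AUC via the rank-sum (Mann-Whitney) formulation: the probability
    that a random positive case outscores a random negative case."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    # Count pairwise wins; ties count as half a win
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

rng = np.random.default_rng(0)
# Hypothetical cohort: unfavorable outcomes tend to receive higher scores
y = rng.random(200) < 0.3
scores = np.where(y, rng.beta(5.0, 2.0, 200), rng.beta(2.0, 5.0, 200))

auc = auc_score(y, scores)
high_risk = scores > 0.37  # the cutoff value reported in the abstract
print(f"AUC = {auc:.2f}, high-risk fraction = {high_risk.mean():.2f}")
```

The rank-sum formulation used here is numerically equivalent to the usual trapezoidal ROC AUC.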
Deep learning-based clinical decision support system for intracerebral hemorrhage: an imaging-based AI-driven framework for automated hematoma segmentation and trajectory planning.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.5.FOCUS25246
Zhichao Gan, Xinghua Xu, Fangye Li, Ron Kikinis, Jiashu Zhang, Xiaolei Chen
Objective: Intracerebral hemorrhage (ICH) remains a critical neurosurgical emergency with high mortality and long-term disability. Despite advancements in minimally invasive techniques, procedural precision remains limited by hematoma complexity and resource disparities, particularly in underserved regions where 68% of global ICH cases occur. Therefore, the authors aimed to introduce a deep learning-based decision support and planning system to democratize surgical planning and reduce operator dependence.

Methods: A retrospective cohort of 347 patients (31,024 CT slices) from a single hospital (March 2016-June 2024) was analyzed. The framework integrated nnU-Net-based hematoma and skull segmentation, CT reorientation via ocular landmarks (mean angular correction 20.4° [SD 8.7°]), safety zone delineation with dual anatomical corridors, and trajectory optimization prioritizing maximum hematoma traversal and critical structure avoidance. A validated scoring system was implemented for risk stratification.

Results: With the artificial intelligence (AI)-driven system, the automated segmentation accuracy reached clinical-grade performance (Dice similarity coefficient 0.90 [SD 0.14] for hematoma and 0.99 [SD 0.035] for skull), with strong interrater reliability (intraclass correlation coefficient 0.91). For trajectory planning of supratentorial hematomas, the system achieved a low-risk trajectory in 80.8% (252/312) and a moderate-risk trajectory in 15.4% (48/312) of patients, while replanning was required due to high-risk designations in 3.8% of patients (12/312).

Conclusions: This AI-driven system demonstrated robust efficacy for supratentorial ICH, addressing 60% of prevalent hemorrhage subtypes. While limitations remain in infratentorial hematomas, this novel automated hematoma segmentation and surgical planning system could be helpful in assisting less-experienced neurosurgeons with limited resources in primary healthcare settings.

Neurosurgical focus 59(1): E5.
Citations: 0
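Segmentation accuracy above is reported as the Dice similarity coefficient (DSC). A minimal sketch of the metric on toy binary masks (the rectangles below are illustrative, not study data):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|).
    Two empty masks are defined as perfectly matching (1.0)."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy "hematoma" masks: the prediction overlaps ground truth imperfectly
truth = np.zeros((64, 64), bool); truth[20:40, 20:40] = True
pred = np.zeros((64, 64), bool); pred[22:42, 20:40] = True

dsc = dice(truth, pred)
print(f"DSC = {dsc:.3f}")  # → DSC = 0.900
```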
Machine learning approaches for predicting prolonged hospital length of stay after lumbar fusion surgery in patients aged 75 years and older: a retrospective cohort study based on comprehensive geriatric assessment.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS24614
Qijun Wang, Shuaikang Wang, Peng Wang, Shibao Lu
Objective: Postoperative recovery following lumbar fusion surgery in patients aged 75 years and older often requires a prolonged length of stay (PLOS) in the hospital. Accurately predicting the risk of PLOS and assessing its risk factors for preoperative optimization are crucial to guide clinical decision-making. The aim of this study was to select the risk factors for PLOS and develop a machine learning (ML) model to estimate the likelihood of PLOS based on comprehensive geriatric assessment (CGA) domains in older patients undergoing lumbar fusion surgery.

Methods: An observational cohort of 242 patients aged ≥ 75 years (median age 80 years) undergoing lumbar fusion surgery at a single center from March 2019 to December 2021 was retrospectively reviewed. Predictor variables consisted of clinical characteristics, CGA variables, and intraoperative variables. The primary outcome was PLOS, defined as a hospital LOS above the 75th percentile in the overall study population. Patients were randomly divided into two groups (7:3) for model training and validation. Ensemble ML algorithms were used to select the significant variables associated with PLOS, and 9 ML models were used to develop predictive models. The Shapley Additive Explanations (SHAP) method was used for model interpretation and feature importance ranking.

Results: Three ensemble ML algorithms selected 9 CGA and clinical variables as influential factors of PLOS. The random forest (RF) model had the best predictive performance among the models evaluated, with an area under the receiver operating characteristic curve of 0.822 (95% CI 0.727-0.917) and an F1-score of 0.571. SHAP values indicated that the duration of surgery, the number of fusion levels, and age were the most important predictors, while the Fried frailty phenotype was the most important CGA variable for PLOS. The RF model and its SHAP interpretations were deployed online for clinical utility.

Conclusions: This ML model could facilitate individual risk prediction and risk factor identification for PLOS in older patients undergoing lumbar fusion surgery, with the potential to improve preoperative optimization.

Neurosurgical focus 59(1): E16.
Citations: 0
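The study pairs a random forest with SHAP for interpretation. As a dependency-light stand-in, the sketch below uses scikit-learn's impurity-based feature importances on synthetic data; the feature names and the generating rule are hypothetical, chosen only to mirror the reported top predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
n = 242  # cohort size from the abstract; all feature values are synthetic
X = rng.normal(size=(n, 4))
cols = ["surgery_minutes", "fusion_levels", "age", "frailty_score"]

# Hypothetical generating rule: long surgery and many fused levels
# drive prolonged stay; the other two columns are pure noise here
y = (X[:, 0] + 0.8 * X[:, 1] + 0.3 * rng.normal(size=n)) > 0.5

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
ranking = sorted(zip(cols, rf.feature_importances_), key=lambda t: -t[1])
for name, imp in ranking:
    print(f"{name:16s} {imp:.3f}")
```

Impurity-based importances are a rougher signal than SHAP values (they can favor high-cardinality features), but the ranking workflow is the same.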
A novel deep learning system for automated diagnosis and grading of lumbar spinal stenosis based on spine MRI: model development and validation.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS24670
Tianyi Wang, Aobo Wang, Yiling Zhang, Xingyu Liu, Ning Fan, Shuo Yuan, Peng Du, Qichao Wu, Ruiyuan Chen, Yu Xi, Zhao Gu, Qi Fei, Lei Zang
Objective: The study aimed to develop a single-stage deep learning (DL) screening system for automated binary and multiclass grading of lumbar central stenosis (LCS), lateral recess stenosis (LRS), and lumbar foraminal stenosis (LFS).

Methods: Consecutive inpatients who underwent lumbar MRI at the authors' center were retrospectively reviewed for the internal dataset. Axial and sagittal lumbar MRI scans were collected. Based on a new MRI diagnostic criterion, all MRI studies were labeled by two spine specialists and calibrated by a third spine specialist to serve as the reference standard. Furthermore, two spine clinicians labeled all MRI studies independently to compare interobserver reliability with the DL model. Samples were assigned to training, validation, and test sets at a proportion of 8:1:1. Additional patients from another center were enrolled as the external test dataset. A modified single-stage YOLOv5 network was designed for simultaneous detection of regions of interest (ROIs) and grading of LCS, LRS, and LFS. Quantitative evaluation metrics of accuracy and reliability for the model were computed.

Results: In total, 420 and 50 patients were enrolled in the internal and external datasets, respectively. High recalls of 97.4%-99.8% were achieved for ROI detection of lumbar spinal stenosis (LSS). The system yielded multigrade area under the curve (AUC) values of 0.93-0.97 in the internal test set and 0.85-0.94 in the external test set for LCS, LRS, and LFS. In binary grading, the DL model achieved high sensitivities of 0.97 for LCS, 0.98 for LRS, and 0.96 for LFS, slightly better than those achieved by spine clinicians in the internal test set. In the external test set, the binary sensitivities were 0.98 for LCS, 0.96 for LRS, and 0.95 for LFS. For reliability assessment, the kappa coefficients between the DL model and the reference standard were 0.92, 0.88, and 0.91 for LCS, LRS, and LFS, respectively, slightly higher than those of nonexpert spine clinicians.

Conclusions: The authors designed a novel DL system that demonstrated promising performance, especially in sensitivity, for automated diagnosis and grading of different types of lumbar spinal stenosis using spine MRI. The reliability of the system was better than that of spine surgeons. The system may serve as a triage tool for LSS to reduce misdiagnosis and optimize routine processes in clinical work.

Neurosurgical focus 59(1): E6.
Citations: 0
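Interobserver reliability above is summarized with kappa coefficients. A small numpy sketch of Cohen's kappa on toy three-level grades (all values invented for illustration):

```python
import numpy as np

def cohen_kappa(a, b):
    """Cohen's kappa: agreement between two raters beyond chance,
    (p_observed - p_chance) / (1 - p_chance)."""
    a, b = np.asarray(a), np.asarray(b)
    po = (a == b).mean()  # observed agreement
    # Chance agreement from each rater's marginal category frequencies
    pe = sum((a == c).mean() * (b == c).mean() for c in np.union1d(a, b))
    return (po - pe) / (1 - pe)

# Toy stenosis grades (0 = none, 1 = moderate, 2 = severe), model vs reference
ref = np.array([0, 0, 1, 1, 2, 2, 2, 0, 1, 2])
model = np.array([0, 0, 1, 2, 2, 2, 2, 0, 1, 2])
k = cohen_kappa(ref, model)
print(f"kappa = {k:.3f}")  # → kappa = 0.846
```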
Introduction. Artificial intelligence in neurosurgery: transforming a data-intensive specialty.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS24674
Benjamin S Hopkins, Garnette R Sutherland, Samuel R Browd, Daniel A Donoho, Eric K Oermann, Clemens M Schirmer, Brenton Pennicooke, Wael F Asaad
Neurosurgical focus 59(1): E1.
Citations: 0
Phenotype-driven risk stratification of cerebral aneurysms using Shapley Additive Explanations-based supervised clustering: a novel approach to rupture prediction.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS241024
Shrinit Babel, Syed R H Peeran
Objective: The aim of this study was to address the limitations of traditional aneurysm risk scoring systems and computational fluid dynamics (CFD) analyses by applying a supervised clustering framework to identify distinct aneurysm phenotypes and improve rupture risk prediction.

Methods: Geometric and morphological data for 103 cerebral aneurysms were obtained from the AneuriskWeb dataset. To segment the cerebral aneurysm data into information-dense clusters that relate to aneurysm rupture risk, the authors trained an Extreme Gradient Boosting model for Shapley Additive Explanations (SHAP)-based feature attribution, followed by nonlinear dimensionality reduction. Hierarchical Density-Based Spatial Clustering of Applications with Noise (HDBSCAN) was then applied to the SHAP-transformed feature space to identify clusters, which were subsequently interpreted directly using rule-based machine learning and indirectly with phenotype visualization.

Results: The initial SHAP analysis identified the parent vessel diameter, neck vessel angle, and the cross-sectional area along the centerline of the sac as the most significant predictors of rupture risk. Clustering revealed three distinct aneurysm phenotypes with a high degree of separation (silhouette score = 0.915). Cluster α, characterized by parent vessel diameters > 3.08 mm and elongated geometries, was a low-risk phenotype with a 4.16% rupture rate. Cluster β included only ruptured aneurysms, with vessel diameters ≤ 1.65 mm and nonspherical structures. Cluster γ represented a mixed-risk aneurysm phenotype (rupture rate 45.45%) with intermediate vessel diameters (range 1.65-3.08 mm); acute neck angles (< 90°) increased the rupture rate within this cluster.

Conclusions: The supervised clustering identified distinct cerebral aneurysm phenotypes, balancing granularity with interpretability in CFD data analysis. Future studies should build on these phenotype-driven insights with temporal analyses and larger datasets for validation, as well as an end-to-end framework to enhance scalability.

Neurosurgical focus 59(1): E3.
Citations: 0
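Cluster separation above is quantified with the silhouette score. The sketch below reproduces that measurement on three synthetic, well-separated "phenotypes"; KMeans stands in for the paper's HDBSCAN so the example stays within scikit-learn:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
# Three tight synthetic blobs in a 2D feature space (stand-ins for the
# SHAP-transformed geometry features); 40 + 40 + 23 = 103 points, matching
# the cohort size only for flavor
blobs = np.vstack([
    rng.normal([0.0, 0.0], 0.2, (40, 2)),
    rng.normal([3.0, 0.0], 0.2, (40, 2)),
    rng.normal([0.0, 3.0], 0.2, (23, 2)),
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(blobs)
sil = silhouette_score(blobs, labels)
print(f"silhouette = {sil:.3f}")
```

A silhouette near 1 means points sit far closer to their own cluster than to the next nearest one, which is what the paper's 0.915 indicates.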
Streamlining microsurgical procedures: a phantom trial of an artificial intelligence-driven robotic microscope assistant.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS25142
Michael Murek, Markus Philipp, Marielena Gutt-Will, Andrea Maria Mathis, David Bervini, Stefan Saur, Franziska Mathis-Ullrich, Andreas Raabe
Objective: Surgical microscopes are essential in microsurgery for magnification, focus, and illumination. However, surgeons must frequently adjust the microscope manually, typically via a handgrip or mouth switch, to maintain a well-centered view that ensures clear visibility of the operative field and surrounding anatomy. These frequent adjustments can disrupt surgical workflow, increase cognitive load, and divert surgeons' focus from their surgical task. To address these challenges, the authors introduced and evaluated a novel robotic assistance system that leverages AI to automatically detect the surgical area of interest by localizing surgical instrument tips and robotically recentering the microscope's field of view.

Methods: This preclinical user study with 19 neurosurgeons compared the robotic assistance system with state-of-the-art microscope controls, i.e., a handgrip and mouth switch. Participants engaged in a custom-designed microsurgical scenario involving a phantom-based anastomosis requiring frequent microscope adjustments. Task load related to microscope handling was assessed using the National Aeronautics and Space Administration Task Load Index questionnaire, and efficiency and workflow compatibility were analyzed based on suturing time and interruption frequency. To evaluate the effectiveness of the robotic assistance system in maintaining a centered view, heat maps that visualize the areas where surgeons operated with their instrument tips were computed.

Results: The robotic assistance system significantly reduced microscope-associated task load compared to the handgrip, decreasing physical (r = 0.59, p < 0.001) and temporal (r = 0.49, p = 0.022) demand while enhancing microscope handling performance (r = 0.40, p = 0.003). In comparison to the mouth switch, reductions in physical (r = 0.45, p = 0.002) and mental (r = 0.32, p = 0.031) demand were observed, alongside performance improvements (r = 0.41, p = 0.008). Furthermore, robotic assistance increased effective suturing time by approximately 10% (r = 0.90, p < 0.001), reduced interruptions (r = 0.52, p = 0.035), and enabled faster reaction times when readjusting the microscope (r = 0.68, p = 0.005) in contrast to the handgrip. According to the heat map analysis, the robotic assistance system consistently promoted a more centered microscope view compared with manual controls.

Conclusions: The novel robotic assistance system enhances microsurgical efficiency by AI-assisted microscope adjustments, thereby reducing task load and streamlining workflow. Compared to manual microscope control, automating microscope adjustments minimizes distractions and task switching, allowing surgeons to maintain a consistently centered view of the operative field. Future studies should focus on clinical validation in live surgeries.

Neurosurgical focus 59(1): E2.
Citations: 0
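The comparisons above report effect sizes r for paired conditions (robotic assistance vs handgrip or mouth switch). One common way to obtain such an r for matched pairs is the rank-biserial correlation; the sketch below computes it on invented task-load scores. The abstract does not state which estimator the authors used, and ties in |diff| are not handled here, which is acceptable for an illustration:

```python
import numpy as np

def rank_biserial(before, after):
    """Matched-pairs rank-biserial correlation in [-1, 1]:
    (sum of ranks favoring improvement - sum favoring worsening) / total."""
    diff = np.asarray(after, float) - np.asarray(before, float)
    diff = diff[diff != 0]  # zero differences are dropped, as in Wilcoxon
    ranks = np.argsort(np.argsort(np.abs(diff))) + 1.0  # ranks of |diff|
    favorable = ranks[diff < 0].sum()    # lower task load with the robot
    unfavorable = ranks[diff > 0].sum()
    return (favorable - unfavorable) / ranks.sum()

# Hypothetical task-load scores for 10 surgeons, handgrip vs robot
handgrip = np.array([62, 55, 71, 49, 66, 58, 73, 64, 60, 52])
robot = np.array([45, 50, 60, 47, 52, 59, 55, 48, 50, 44])
r = rank_biserial(handgrip, robot)
print(f"rank-biserial r = {r:.2f}")  # → rank-biserial r = 0.96
```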
Generation of synthetic CT-like imaging of the spine from biplanar radiographs: comparison of different deep learning architectures.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS25170
Massimo Bottini, Olivier Zanier, Raffaele Da Mutten, Maria L Gandia-Gonzalez, Erik Edström, Adrian Elmi-Terander, Luca Regli, Carlo Serra, Victor E Staartjes
Objective: This study compared two deep learning architectures, generative adversarial networks (GANs) and convolutional neural networks combined with implicit neural representations (CNN-INRs), for generating synthetic CT (sCT) images of the spine from biplanar radiographs. The aim of the study was to identify the most robust and clinically viable approach for this potential intraoperative imaging technique.

Methods: A spine CT dataset of 216 training and 54 validation cases was used. Digitally reconstructed radiographs (DRRs) served as 2D inputs for training both models under identical conditions for 170 epochs. Evaluation metrics included the Structural Similarity Index Measure (SSIM), peak signal-to-noise ratio (PSNR), and cosine similarity (CS), complemented by qualitative assessments of anatomical fidelity.

Results: The GAN model achieved a mean SSIM of 0.932 ± 0.015, PSNR of 19.85 ± 1.40 dB, and CS of 0.671 ± 0.177. The CNN-INR model demonstrated a mean SSIM of 0.921 ± 0.015, PSNR of 21.96 ± 1.20 dB, and CS of 0.707 ± 0.114. Statistical analysis revealed significant differences for SSIM (p = 0.001) and PSNR (p < 0.001), while CS differences were not statistically significant (p = 0.667). Qualitative evaluations consistently favored the GAN model, which produced more anatomically detailed and visually realistic sCT images.

Conclusions: This study demonstrated the feasibility of generating spine sCT images from biplanar radiographs using GAN and CNN-INR models. While neither model achieved clinical-grade outputs, the GAN architecture showed greater potential for generating anatomically accurate and visually realistic images. These findings highlight the promise of sCT image generation from biplanar radiographs as an innovative approach to reducing radiation exposure and improving imaging accessibility, with GANs emerging as the more promising avenue for further research and clinical integration.

Neurosurgical focus 59(1): E13.
Citations: 0
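Image similarity above is reported with SSIM and PSNR. PSNR is the simpler of the two; a minimal numpy sketch on a synthetic slice (SSIM needs windowed local statistics and is omitted here):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE).
    Higher values mean the test image is closer to the reference."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(data_range**2 / mse)

rng = np.random.default_rng(1)
ct = rng.random((32, 32))  # stand-in "ground truth" slice in [0, 1]
# Noisy "sCT" reconstruction: the reference plus small Gaussian error
sct = np.clip(ct + rng.normal(0.0, 0.05, ct.shape), 0.0, 1.0)

val = psnr(ct, sct)
print(f"PSNR = {val:.1f} dB")
```

Note that PSNR depends on the declared data range (here 1.0 for normalized intensities; 12-bit CT would use a larger range).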
Open-source AI-assisted rapid 3D color multimodal image fusion and preoperative augmented reality planning of extracerebral tumors.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS24557
Xiaolin Hou, Xiaoling Liao, Ruxiang Xu, Fan Fei, Bo Wu
Objective: This study aimed to develop an advanced method for preoperative planning and surgical guidance using open-source artificial intelligence (AI)-assisted rapid 3D color multimodal image fusion (MIF) and augmented reality (AR) in extracerebral tumor surgical procedures.

Methods: In this prospective trial of 130 patients with extracerebral tumors, the authors implemented a novel workflow combining FastSurfer (AI-based brain parcellation), Raidionics-Slicer (deep learning tumor segmentation), and Sina AR projection. Comparative analysis between AI-assisted 3D-color MIF (group A) and manual 3D-monochrome MIF (group B) was conducted, evaluating surgical parameters (operative time, blood loss, resection completeness), clinical outcomes (complications, hospital stay, modified Rankin Scale [mRS] scores), and technical performance metrics (processing time, Dice similarity coefficient [DSC], 95% Hausdorff distance [HD]).

Results: The AI-3D-color MIF system achieved superior technical performance with brain segmentation in 1.21 ± 0.13 minutes (vs 4.51 ± 0.15 minutes for manual segmentation), demonstrating exceptional accuracy (DSC 0.978 ± 0.012 vs 0.932 ± 0.029; 95% HD 1.51 ± 0.23 mm vs 3.52 ± 0.35 mm). Clinically, group A demonstrated significant advantages with shorter operative duration, reduced intraoperative blood loss, a higher rate of gross-total resection, lower complication incidence, and better postoperative mRS scores (all p < 0.05).

Conclusions: The integration of open-source AI tools (FastSurfer/Raidionics) with AR visualization creates an efficient 3D-color MIF workflow that enhances anatomical understanding through color-coded functional mapping and vascular relationship visualization. This system significantly improves surgical precision while reducing perioperative risks, representing a cost-effective solution for advanced neurosurgical planning in resource-constrained settings.

Neurosurgical focus 59(1): E12.
Citations: 0
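Segmentation fidelity above is reported with the Dice coefficient and the 95% Hausdorff distance (HD). A numpy sketch of the 95th-percentile symmetric Hausdorff distance on toy contours; the deliberately placed outlier point shows why the 95th percentile is preferred over the plain maximum:

```python
import numpy as np

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between point sets:
    for each set, the distance from each point to the nearest point in the
    other set, summarized at the 95th percentile instead of the maximum."""
    d = np.sqrt(((pts_a[:, None, :] - pts_b[None, :, :]) ** 2).sum(-1))
    a_to_b = d.min(axis=1)
    b_to_a = d.min(axis=0)
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

# Toy "surfaces": a circle of radius 10 mm vs the same circle shifted 1 mm,
# with one outlier point that the percentile should damp
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
a = 10 * np.c_[np.cos(theta), np.sin(theta)]
b = a + np.array([1.0, 0.0])
b[0] += 8.0  # a single spurious point, far from both contours

val = hd95(a, b)
print(f"HD95 = {val:.2f} mm")
```

The plain (100th-percentile) Hausdorff distance would be dominated by the single outlier; HD95 stays near the true 1 mm shift.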
Zero-shot segmentation of spinal vertebrae with metastatic lesions: an analysis of Meta's Segment Anything Model 2 and factors affecting learning-free segmentation.
IF 3.3 | CAS Tier 2 (Medicine)
Neurosurgical focus Pub Date: 2025-07-01 DOI: 10.3171/2025.4.FOCUS25234
Rushmin Khazanchi, Sachin Govind, Rishi Jain, Rebecca Du, Nader S Dahdaleh, Christopher S Ahuja, Najib El Tecle
Objective: Accurate vertebral segmentation is an important step in imaging analysis pipelines for diagnosis and subsequent treatment of spinal metastases. Segmenting these metastases is especially challenging given their radiological heterogeneity. Conventional approaches for segmenting vertebrae have included manual review or deep learning; however, manual review is time-intensive with interrater reliability issues, while deep learning requires large datasets to build. The rise of generative AI, notably tools such as Meta's Segment Anything Model 2 (SAM 2), holds promise in its ability to rapidly generate segmentations of any image without pretraining (zero-shot). The authors of this study aimed to assess the ability of SAM 2 to segment vertebrae with metastases.

Methods: A publicly available set of spinal CT scans from The Cancer Imaging Archive was used, which included patient sex, BMI, vertebral locations, types of metastatic lesion (lytic, blastic, or mixed), and primary cancer type. Ground-truth segmentations for each vertebra, derived by neuroradiologists, were further extracted from the dataset. SAM 2 then produced segmentations for each vertebral slice without any training data, all of which were compared to gold standard segmentations using the Dice similarity coefficient (DSC). Relative performance differences were assessed across clinical subgroups using standard statistical techniques.

Results: Imaging data were extracted for 55 patients and 779 unique thoracolumbar vertebrae, 167 of which had metastatic tumor involvement. Across these vertebrae, SAM 2 had a mean volumetric DSC of 0.833 ± 0.053. SAM 2 performed significantly worse on thoracic vertebrae relative to lumbar vertebrae, female patients relative to male patients, and obese patients relative to non-obese patients.

Conclusions: These results demonstrate that general-purpose segmentation models like SAM 2 can provide reasonable vertebral segmentation accuracy with no pretraining, with efficacy comparable to previously published trained models. Future research should include optimizations of spine segmentation models for vertebral location and patient body habitus, as well as for variations in imaging quality.

Neurosurgical focus 59(1): E18.
Citations: 0
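The subgroup findings above (e.g., thoracic vs lumbar DSC) rest on standard statistical comparisons. A self-contained permutation-test sketch on synthetic per-vertebra Dice scores; all values are invented, and the abstract does not specify which test the authors actually used:

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic per-vertebra DSC values: lumbar drawn slightly higher than
# thoracic, mirroring the direction of the reported difference
thoracic = np.clip(rng.normal(0.82, 0.05, 120), 0, 1)
lumbar = np.clip(rng.normal(0.86, 0.04, 80), 0, 1)

obs = lumbar.mean() - thoracic.mean()

# Permutation test: shuffle the group labels and see how often a gap at
# least this large arises by chance
pooled = np.concatenate([thoracic, lumbar])
n_l = len(lumbar)
n_perm = 2000
count = 0
for _ in range(n_perm):
    s = rng.permutation(pooled)
    if abs(s[:n_l].mean() - s[n_l:].mean()) >= abs(obs):
        count += 1
p = count / n_perm

print(f"mean gap = {obs:.3f}, permutation p = {p:.4f}")
```

A permutation test makes no normality assumption, which suits bounded, skewed metrics like DSC.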