Brain Informatics: Latest Articles

Benchmarking resting state fMRI connectivity pipelines for classification: robust accuracy despite processing variability in cross-site eye state prediction.
IF 4.5
Brain Informatics Pub Date : 2026-05-05 DOI: 10.1186/s40708-026-00305-1
Tatiana Medvedeva, Irina Knyazeva, Ruslan Masharipov, Alexander Korotkov, Denis Cherednichenko, Maxim Kireev
The rapid evolution of machine learning (ML) methods has yielded promising results in human brain neuroscience. However, the reproducibility of ML applications in neuroimaging remains limited, challenging the generalizability of inferences to broader populations. Beyond the inherent variability of brain activity (in both healthy and pathological states), poor reproducibility is further compounded by inconsistencies in data preprocessing techniques and in methods for calculating functional connectivity (FC), which serve as features for brain state classification. To systematically assess the impact of these factors on ML applications to fMRI data, we benchmarked a comprehensive set of FC analysis pipelines on the task of classifying fMRI data recorded in two fundamentally different states: eyes open and eyes closed. In contrast to studies involving heterogeneous clinical populations or complex cognitive tasks, our controlled experimental design, based on two independent datasets of healthy participants collected in different laboratories, minimizes variability related to task design or pathological brain states. Classification accuracy and reproducibility were compared across 256 distinct FC analysis pipelines, covering common preprocessing approaches, brain parcellation schemes, and connectivity metrics. We employed two validation strategies: direct cross-site validation, in which a model was trained on one site and tested on the other, and few-shot domain adaptation, in which a few samples from the test site were added to the training set. Despite the substantial variability in pipeline configurations, we observed consistently high classification accuracy (~90%), confirming that FC-based models can robustly discriminate between well-defined brain states (eye conditions) across different acquisition sites. The best results, in terms of both classification accuracy and stability, were obtained using Pearson correlation and tangent-space parametrization as FC measures, the Brainnetome atlas, and confound regression strategies based on the CompCor method. These findings highlight the resilience of rs-fMRI FC-derived characteristics to methodological variation and support their utility in biomarker discovery, particularly in settings involving stable and reproducible brain states.
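As a concrete illustration of the simplest connectivity metric in this benchmark, the sketch below builds a Pearson-correlation FC matrix from ROI time series. This is plain Python on hypothetical toy data, not the authors' pipeline; their pipelines operate on preprocessed fMRI with real parcellations.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length time series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def fc_matrix(rois):
    """Symmetric ROI-by-ROI functional connectivity matrix."""
    n = len(rois)
    return [[pearson(rois[i], rois[j]) for j in range(n)] for i in range(n)]

# Toy example: three hypothetical ROI time series.
rois = [
    [1.0, 2.0, 3.0, 4.0],
    [2.0, 4.0, 6.0, 8.0],   # perfectly correlated with ROI 0
    [4.0, 3.0, 2.0, 1.0],   # perfectly anti-correlated with ROI 0
]
fc = fc_matrix(rois)
```

The upper triangle of such a matrix is what typically gets vectorized and fed to a classifier; tangent-space parametrization, the other top performer here, instead projects covariance matrices onto a Riemannian tangent space.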
Citations: 0
A deep-learning framework for brain tumor segmentation via three-dimensional mass-preserving geometric transformation.
IF 4.5
Brain Informatics Pub Date : 2026-05-05 DOI: 10.1186/s40708-026-00307-z
Tsung-Ming Huang, Kai-Qian Zheng, Wen-Wei Lin, Tiexiang Li, Shing-Tung Yau
This article presents a robust and efficient deep-learning framework for brain tumor segmentation. We introduce a novel three-dimensional (3D) mass-preserving geometric transformation (MPGT) that employs a homotopy method to transform irregular brain magnetic resonance (MR) images into standardized solid cubes. This transformation preserves local mass ratios while maintaining global structural integrity, providing a structured input for deep learning models. Furthermore, we propose a modified two-phase segmentation strategy to minimize inference time and a postprocessing technique to enhance lesion-wise performance. Extensive validation on the Brain Tumor Segmentation (BraTS) Challenge 2023 dataset demonstrates that our method, when integrated with nnU-Net, achieves competitive Dice scores of 0.9282 (Whole Tumor), 0.8812 (Tumor Core), and 0.8527 (Enhancing Tumor). These results are superior to or comparable with top-ranking competition entries.
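The Dice scores quoted above measure voxel overlap between a predicted and a ground-truth mask. A minimal sketch of the metric on flat binary masks (illustrative only, with made-up masks; not the paper's 3D implementation):

```python
def dice(pred, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|) over binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * inter / total if total else 1.0

# Toy masks: the prediction and ground truth share 2 foreground voxels.
pred  = [1, 1, 1, 0, 0]
truth = [1, 1, 0, 0, 0]
score = dice(pred, truth)  # 2*2 / (3+2) = 0.8
```

For a 3D volume the same formula applies after flattening the voxel grid; BraTS evaluates it separately per region (whole tumor, tumor core, enhancing tumor).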
Citations: 0
Brain network classification considering directed propagation mechanisms of dynamic graphs.
IF 4.5
Brain Informatics Pub Date : 2026-05-02 DOI: 10.1186/s40708-026-00306-0
Xinlei Wang, Zhongyang Wang, Keyan Cao
The classification of functional brain networks plays an important role in the diagnosis of neurodegenerative diseases, brain decoding, and other fields. Functional brain networks effectively reflect the functional connections between brain regions or neurons and accurately represent brain activity, and a large body of work has addressed their classification. However, traditional functional brain networks merely measure static correlations between brain regions or neurons and do not capture causal transmission effects between regions; this directionality is crucial to inter-regional regulatory relationships. Furthermore, because the brain is constantly changing, the dynamics of functional connectivity also play an important role in classification. We therefore propose Dynamic Directed Propagation Networks (DDPN), a classification framework for functional brain networks that accounts for the dynamic directed propagation mechanism. The method effectively captures the dynamics and directionality of dynamic directed brain networks and improves classification accuracy. Experiments on two real datasets show that the proposed method improves accuracy by 3.1-4.1% over state-of-the-art methods.
Citations: 0
BrainFusionNet: a deep learning and XAI model to understand local, global, and sequential features of MRI images for improved brain tumour detection.
IF 4.5
Brain Informatics Pub Date : 2026-04-28 DOI: 10.1186/s40708-026-00303-3
Md Taimur Ahad, Bo Song, Yan Li
Noise in magnetic resonance imaging (MRI) poses challenges for deep learning (DL) when tumor boundaries are obscured, when tumor location and appearance are complicated by overlap between tumor and non-tumor cells, and when modality identification is difficult because tumor features vanish in the later layers of the network. Effective feature extraction from the MRI is one way to overcome these challenges. We therefore develop BrainFusionNet, which combines convolutional neural networks (CNNs), Vision Transformers (ViT), and gated recurrent units (GRUs) to extract spatial, contextual, and sequential features from MRI images for improved brain tumor classification. Explainable-AI methods (SHAP, LIME, and Grad-CAM) are integrated to visualize and highlight the image regions that contribute to BrainFusionNet's decisions. The model is evaluated on two publicly available MRI datasets; k-fold validation indicates 98% accuracy on both. It was also compared with six state-of-the-art (SOTA) CNNs and transfer learning; among the SOTA CNNs, DenseNet121 and VGG16 achieved the highest accuracy, 96%. The novelty of BrainFusionNet is that the hybrid model effectively extracts local and global features from MRI images, even for small-scale tumor regions and small tumor sizes. The model has a balanced sequential CNN architecture that captures low-level and deeper-layer features, and a customized ViT that captures local features, stabilizes gradient flow, and reduces the risk of vanishing gradients during training. The CNN and ViT outputs are fed into a GRU for final classification. We also analyze pixel intensities to determine whether MRI image quality affects classification, finding that the distribution of pixel intensities in MRI images affects DL performance.
Citations: 0
Anatomical-connectivity-guided functional connectivity reveals task-relevant pathways during proactive task-switching via recurrent graph neural networks.
IF 4.5
Brain Informatics Pub Date : 2026-04-26 DOI: 10.1186/s40708-026-00300-6
Siyu Wang, Atsushi Miyata, Teruhisa Okuya, Hiroto Yanagawa, Ayaka Sakaki, Natsuhiro Ichinose, Takatsune Kumada
Citations: 0
SegAnyNeuron: a neural image segmentation network with strong generalization performance by modeling image intensity variation.
IF 4.5
Brain Informatics Pub Date : 2026-04-17 DOI: 10.1186/s40708-026-00298-x
Lin Cai, Ying Zhang, Quanwei Ding, Xiaojun Wang, Pei Sun, Shaoqun Zeng, Tingwei Quan
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13103207/pdf/
Citations: 0
Pipeline evaluation of a state-of-the-art AI algorithm for detection of focal cortical dysplasia: insights into potential failure sources.
IF 4.5
Brain Informatics Pub Date : 2026-04-03 DOI: 10.1186/s40708-026-00299-w
Mateus A Esmeraldo, Stefanie Chambers, Yanniklas Kravutske, Eduardo P Reis, Gregor Kasprian, Ana Filipa Geraldo, Sergios Gatidis, Bruno P Soares
Purpose: MELD Graph is a state-of-the-art artificial intelligence (AI) model for automated detection of focal cortical dysplasia (FCD), but its performance remains limited, highlighting the need to investigate which aspects of the pipeline affect its accuracy.
Methods: A retrospective failure-mode analysis of the MELD Graph pipeline was performed in 242 subjects; model predictions and FreeSurfer segmentations were reviewed to classify errors as segmentation-associated or algorithm-related. FCD imaging features salient to humans were quantified, and statistical associations were examined for both MELD Graph detection and focal FreeSurfer segmentation failure.
Results: MELD Graph showed overall performance similar to previously published non-harmonized results, achieving a sensitivity of 69%, specificity of 44%, and positive predictive value (PPV) of 75%. Focal FreeSurfer segmentation failures were associated with 21% of false-negative patients, 25% of false-positive clusters in patients, and 16% of false-positive clusters in controls. After manual cortical segmentation correction and rerunning of MELD Graph, 67% of the segmentation-associated missed lesions were detected, and segmentation-associated false-positive clusters were reduced or eliminated in 75% of the controls with such clusters. Higher conspicuity on T1-weighted images was associated with MELD Graph detection, whereas greater conspicuity on T2-FLAIR images relative to T1 was associated with detection failure. Non-bottom-of-sulcus lesion location, higher human conspicuity measures, and low T1 image quality were positively associated with focal FreeSurfer segmentation failures.
Conclusion: FreeSurfer segmentation failures are a significant potential source of error in the MELD Graph pipeline. FCD imaging features salient to humans and image quality were also associated with variability in algorithm performance. Robust cortical segmentation and stronger integration of T2-FLAIR imaging features may benefit automated FCD detection tools.
Clinical trial registration: Not applicable. This study is a retrospective analysis of previously acquired open-source imaging datasets and does not constitute a clinical trial.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133319/pdf/
Citations: 0
Filter bank CSP with Riemannian weighting for disability-centric motor imagery brain computer interface.
IF 4.5
Brain Informatics Pub Date : 2026-03-26 DOI: 10.1186/s40708-026-00295-0
Souissi Jihen, Sourour Karmani, Kais Belwafi, Mahdi Jemmali, Ridha Djemal
Brain-computer interfaces (BCIs) were initially created to help individuals with disabilities control devices and communicate without muscle movement; today, BCIs are used for prosthetic control, cognitive enhancement, and neurological rehabilitation. A BCI system depends on analyzing electroencephalogram (EEG) signals captured from the brain. Decoding these intricate, noisy signals is a complex process that combines multiple algorithms to extract meaningful information. One of the most popular techniques is Common Spatial Patterns (CSP), which helps preserve useful and sensitive information. This paper presents an optimized extension of the CSP model for extracting EEG features in a multiclass setting using Riemannian geometry-based weighting. The Riemannian weighting makes covariance matrix computation more robust, decreasing the influence of noise that can significantly distort the mean of covariance matrices in the traditional CSP method. The approach is further extended with a multi-band filter bank, providing a more detailed examination of the EEG signal. Three classifiers are employed to differentiate features across four motor imagery tasks: linear discriminant analysis (LDA), a random forest classifier (RFC), and a multi-layer perceptron (MLP). LDA achieves an accuracy of 80.40%, while MLP and RFC reach 80.02% and 80.90%, respectively. A majority vote combining the decisions of the three classifiers yields 81.83% accuracy and recall, 82.74% precision, and an 81.87% F1-score. The architecture is evaluated on the BCI Competition IV dataset 2a, demonstrating its effectiveness in EEG signal classification for BCI applications.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13133326/pdf/
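The majority-vote fusion described above can be sketched in a few lines. The three label sequences below are hypothetical stand-ins for LDA, RFC, and MLP decisions on five motor-imagery trials (classes 0-3), not outputs from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-classifier label predictions trial by trial.
    predictions: list of label sequences, one per classifier."""
    fused = []
    for labels in zip(*predictions):
        # Most frequent label across classifiers wins this trial.
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

# Hypothetical decisions from three classifiers (e.g. LDA, RFC, MLP).
lda = [0, 1, 2, 3, 0]
rfc = [0, 1, 2, 2, 1]
mlp = [0, 3, 2, 3, 1]
fused = majority_vote([lda, rfc, mlp])  # [0, 1, 2, 3, 1]
```

With three voters and four classes, three-way ties are possible; `Counter.most_common` then breaks the tie by first-counted order, and a production system would typically fall back on the most reliable classifier instead.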
Citations: 0
U-Net-based transfer learning for automated tumour segmentation enabling fully automated [18F]F-DOPA PET analysis in paediatric gliomas.
IF 4.5
Brain Informatics Pub Date : 2026-03-25 DOI: 10.1186/s40708-026-00296-z
Michele Mureddu, Rosella Trò, Federico Giovanni Garau, Nicolò Trebino, Andrea Bianconi, Andrea Rossi, Antonia Ramaglia, Antonio Verrico, Claudia Milanaccio, Giovanni Morana, Massimiliano Iacozzi, Francesco Fiz, Arnoldo Piccardo, Marco Massimo Fato
Background: PET imaging with [18F]F-DOPA shows great promise for assessing paediatric gliomas, but manual tumour delineation and parameter extraction are time-consuming and prone to inter-operator variability.
Methods: We evaluated whether a deep learning model, leveraging transfer learning from adult glioma datasets, could enable a fully automated pipeline for tumour segmentation and PET parameter extraction. Static and dynamic parameters were compared across three approaches: (i) automatic vs semi-automatic, (ii) automatic vs manual, and (iii) manual vs semi-automatic. Data from 103 paediatric patients (median age 11 years; 54 females, 49 males) with static and/or dynamic [18F]F-DOPA PET scans (2011-2024) were retrospectively included for fine-tuning the model. Statistical and survival analyses were performed on 90 subjects; the dynamic analysis included 32 patients.
Results: The best model achieved a Dice score of 0.82 ± 0.11 and was integrated into the pipeline for extracting static and dynamic indices. The automatic tumour-to-striatum ratio showed high reproducibility across comparisons ((i) p = 0.660, (ii) p = 0.342, (iii) p = 0.639), whereas the tumour-to-background ratio differed significantly when comparing manual delineations (p < 0.01). Dynamic parameters demonstrated good reproducibility with the automatic method (p > 0.05). Importantly, both automated static indices correlated significantly with tumour grade and with overall and progression-free survival (p < 0.05).
Conclusions: Transfer learning enabled a fully automatic [18F]F-DOPA PET pipeline for paediatric gliomas, providing reproducible static and dynamic parameter extraction that correlates with clinically relevant outcomes. This approach reduces operator dependence and streamlines analysis, supporting potential integration into routine clinical practice.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13083549/pdf/
Citations: 0
Perivascular fluid clearance links choroid plexus changes to cognitive performance in cerebral small vessel disease.
IF 4.5
Brain Informatics Pub Date : 2026-03-24 DOI: 10.1186/s40708-026-00297-y
Dan Luo, Bin Yang, Lisha Nie, Peng Zeng, Bang Zeng, Binglan Li, Xiaojuan Dong, Tianyou Luo, Yongmei Li
Choroid plexus (CP) dysfunction may impair cerebrospinal fluid (CSF) turnover and perivascular glymphatic clearance, but whether CP microstructural injury correlates with cerebral small vessel disease (CSVD)-related brain alterations and cognitive performance remains unclear. We investigated whether imaging markers of perivascular fluid transport mediate the associations between CP alterations and CSVD pathology. CP microstructure was assessed with mean apparent propagator (MAP) diffusion imaging and χ-separation susceptibility mapping in 139 CSVD patients and 52 healthy controls. Perivascular clearance measures included the diffusion tensor image analysis along the perivascular space (DTI-ALPS) index, basal ganglia free-water fraction (FW-BG), and perivascular space volume fraction (PVSVF-BG). Compared with controls, CSVD patients showed CP microstructural abnormalities and altered perivascular clearance markers, which a random-forest model identified as predictors of CSVD severity (OOB-AUC = 0.755, 95% CI: 0.641-0.836). Mediation analysis revealed that imaging markers of perivascular fluid clearance significantly mediated the associations between CP alterations and structural brain injury (average causal mediation effect, ACME = -110.118 to 80.121, FDR-p < 0.05). Notably, PVSVF-BG was the only glymphatic imaging metric mediating the links of both CP volume and susceptibility with cognitive performance (ACME = -142.474 to 67.351, FDR-p < 0.05). These findings indicate that CP microstructural injury is present in mild CSVD even in the absence of CP volumetric enlargement and is associated with CSVD markers and cognitive performance through disrupted perivascular fluid transport, highlighting the CP-glymphatic axis as a candidate pathway for further longitudinal and mechanistic investigation.
Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC13018506/pdf/
Citations: 0