{"title":"Current status and challenges in electroencephalography (EEG)-based driver fatigue detection: a comprehensive survey.","authors":"Jahid Hassan, Shekh Naziullah, Mamunur Rashid, Thamina Islam, Md Nahidul Islam, Md Shofiqul Islam, Shoyeb Mahmud","doi":"10.1007/s11571-025-10320-3","DOIUrl":"10.1007/s11571-025-10320-3","url":null,"abstract":"<p><p>Driver fatigue is a major contributor to traffic accidents, leading to increased fatality rates and severe damage compared to incidents involving alert drivers. Electroencephalography (EEG) has emerged as a widely used method for detecting driver fatigue due to its ability to capture brain activity patterns. This survey provides a thorough analysis of devices that detect driver fatigue using EEG, examining existing methodologies, challenges, and future research directions. This study was carried out according to PRISMA criteria. Relevant studies were retrieved from SpringerLink, Web of Science, IEEE Xplore, Scopus, and ScienceDirect, covering research published until February 16, 2025. After 267 publications were identified, 87 scientific papers were fully analyzed based on their relevance and contribution to the identification of driver fatigue using EEG. The review explores the article selection process, followed by an in-depth discussion of driver fatigue detection systems across various domains. Applications of Machine Learning (ML) in EEG-based fatigue evaluation are carefully reviewed, covering data collection, preliminary processing, feature extraction, categorization techniques, and performance assessment. Additionally, a comparative evaluation of cutting-edge research provides a comprehensive visualization of current research trends. This survey highlights the advantages, limitations, and future prospects of EEG-based driver fatigue detection, offering valuable insights for improving road safety. 
The findings contribute to the development of more reliable and real-time fatigue detection systems by addressing existing challenges and recommending potential solutions.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"142"},"PeriodicalIF":3.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12401835/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144991610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
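Among the feature-extraction techniques this survey covers, spectral band-power ratios are a staple of EEG fatigue detection. As a hedged illustration only — the band-power values and the choice of ratios below are invented, not drawn from the surveyed papers — two commonly reported ratio indices can be computed like this:

```python
# Illustrative only: ratio-based EEG fatigue indices computed from
# pre-computed band powers (e.g. from a PSD estimate). All values invented.
def fatigue_indices(theta, alpha, beta):
    """Return two ratio indices often reported in EEG fatigue studies."""
    return {
        "theta_alpha": theta / alpha,         # theta/alpha ratio
        "slow_fast": (theta + alpha) / beta,  # (theta + alpha)/beta ratio
    }

alert = fatigue_indices(theta=4.0, alpha=8.0, beta=6.0)    # hypothetical alert EEG
drowsy = fatigue_indices(theta=9.0, alpha=12.0, beta=4.0)  # hypothetical drowsy EEG
print(alert["slow_fast"], drowsy["slow_fast"])  # the slow/fast ratio rises here
```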
Cognitive Neurodynamics, Pub Date: 2025-12-01, Epub Date: 2025-02-05, DOI: 10.1007/s11571-025-10221-5
Gaoxuan Li, Bo Chen, Weigang Sun, Zhenbing Liu
{"title":"A stacking classifier for distinguishing stages of Alzheimer's disease from a subnetwork perspective.","authors":"Gaoxuan Li, Bo Chen, Weigang Sun, Zhenbing Liu","doi":"10.1007/s11571-025-10221-5","DOIUrl":"10.1007/s11571-025-10221-5","url":null,"abstract":"<p><p>Accurately distinguishing stages of Alzheimer's disease (AD) is crucial for diagnosis and treatment. In this paper, we introduce a stacking classifier method that combines six single classifiers into a stacking classifier. Using brain network models and network metrics, we employ <i>t</i>-tests to identify abnormal brain regions, from which we construct a subnetwork and extract its features to form the training dataset. Our method is then applied to the ADNI (Alzheimer's Disease Neuroimaging Initiative) datasets, categorizing the stages into four categories: Alzheimer's disease, mild cognitive impairment (MCI), mixed Alzheimer's mild cognitive impairment (ADMCI), and healthy controls (HCs). We investigate four classification groups: AD-HCs, AD-MCI, HCs-ADMCI, and HCs-MCI. Finally, we compare the classification accuracy between a single classifier and our stacking classifier, demonstrating superior accuracy with our stacking classifier from a subnetwork-based viewpoint.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"38"},"PeriodicalIF":3.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11799466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143381814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
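The stacking idea in the abstract above — several base classifiers whose outputs feed a meta-level combiner — can be sketched without any ML library. Everything below is a toy stand-in (1-D threshold classifiers, invented data, and an accuracy-weighted vote in place of a trained meta-classifier), not the paper's six classifiers or its subnetwork features:

```python
# Toy stacking sketch: base classifiers predict, then a meta-rule combines
# their outputs. The meta-rule here weights each base learner by its
# training accuracy, a simple stand-in for a learned meta-classifier.
def make_threshold_clf(t):
    return lambda x: 1 if x > t else 0

def train_meta(base_clfs, xs, ys):
    weights = []
    for clf in base_clfs:
        acc = sum(clf(x) == y for x, y in zip(xs, ys)) / len(xs)
        weights.append(acc)

    def meta(x):
        votes = [clf(x) for clf in base_clfs]  # stacked base-level outputs
        score = sum(w * (1 if v else -1) for w, v in zip(weights, votes))
        return 1 if score > 0 else 0
    return meta

# Invented 1-D training data: label is 1 when x > 0.5.
base_clfs = [make_threshold_clf(t) for t in (0.3, 0.5, 0.7)]
xs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]
meta = train_meta(base_clfs, xs, ys)
print(meta(0.4), meta(0.6))
```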
{"title":"TCANet: a temporal convolutional attention network for motor imagery EEG decoding.","authors":"Wei Zhao, Haodong Lu, Baocan Zhang, Xinwang Zheng, Wenfeng Wang, Haifeng Zhou","doi":"10.1007/s11571-025-10275-5","DOIUrl":"10.1007/s11571-025-10275-5","url":null,"abstract":"<p><p>Decoding motor imagery electroencephalogram (MI-EEG) signals is fundamental to the development of brain-computer interface (BCI) systems. However, robust decoding remains a challenge due to the inherent complexity and variability of MI-EEG signals. This study proposes the Temporal Convolutional Attention Network (TCANet), a novel end-to-end model that hierarchically captures spatiotemporal dependencies by progressively integrating local, fused, and global features. Specifically, TCANet employs a multi-scale convolutional module to extract local spatiotemporal representations across multiple temporal resolutions. A temporal convolutional module then fuses and compresses these multi-scale features while modeling both short- and long-term dependencies. Subsequently, a stacked multi-head self-attention mechanism refines the global representations, followed by a fully connected layer that performs MI-EEG classification. The proposed model was systematically evaluated on the BCI IV-2a and IV-2b datasets under both subject-dependent and subject-independent settings. In subject-dependent classification, TCANet achieved accuracies of 83.06% and 88.52% on BCI IV-2a and IV-2b respectively, with corresponding Kappa values of 0.7742 and 0.7703, outperforming multiple representative baselines. In the more challenging subject-independent setting, TCANet achieved competitive performance on IV-2a and demonstrated potential for improvement on IV-2b. 
The code is available at https://github.com/snailpt/TCANet.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"91"},"PeriodicalIF":3.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12167204/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144309661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
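The claim that the temporal convolutional module covers both short- and long-term dependencies rests on dilation; the helper below (kernel size and dilation schedule are illustrative, not TCANet's published hyperparameters) shows how the receptive field of such a stack grows:

```python
# Receptive field of a stack of dilated 1-D convolutions: each layer with
# kernel size k and dilation d adds (k - 1) * d samples of temporal context.
def tcn_receptive_field(kernel_size, dilations):
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Four layers, kernel 3, doubling dilations: 1 + 2*(1 + 2 + 4 + 8) = 31 samples.
print(tcn_receptive_field(3, [1, 2, 4, 8]))
```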
Cognitive Neurodynamics, Pub Date: 2025-12-01, Epub Date: 2025-06-24, DOI: 10.1007/s11571-025-10283-5
Xuefen Lin, Linhui Fan, Yifan Gu, Zhixian Wu
{"title":"Emotion recognition framework based on adaptive window selection and CA-KAN.","authors":"Xuefen Lin, Linhui Fan, Yifan Gu, Zhixian Wu","doi":"10.1007/s11571-025-10283-5","DOIUrl":"10.1007/s11571-025-10283-5","url":null,"abstract":"<p><p>In recent years, emotion recognition, particularly EEG-based emotion recognition, has found widespread application across various domains. Enhancing EEG data processing and emotion recognition models remains a key research focus in this field. This paper presents an emotion recognition framework combining the CUSUM algorithm-based adaptive window selection technique with the convolutional attention-enhanced Kolmogorov-Arnold Networks (CA-KAN). The improved CUSUM algorithm effectively extracts the most emotion-relevant segments from raw EEG data. Furthermore, by enhancing the KAN network, the CA-KAN model achieves both high accuracy and efficiency in emotion recognition. The proposed framework achieved peak classification accuracies of 94.63% and 94.73% on the SEED and SEED-IV datasets, respectively. Additionally, the framework offers a lightweight advantage, demonstrating significant potential for real-world applications, including medical emotion monitoring and driver emotion detection.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"100"},"PeriodicalIF":3.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12187633/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144504983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
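The adaptive window selection above builds on standard cumulative-sum (CUSUM) change detection. A minimal one-sided CUSUM sketch — the signal, target, slack, and threshold are invented, and the paper's improved variant differs — locates the point where a signal's mean shifts upward:

```python
# Minimal one-sided CUSUM change detector (illustrative parameters).
def cusum_change_point(x, target, slack, threshold):
    """Return the index where the positive CUSUM statistic first exceeds threshold."""
    s = 0.0
    for i, v in enumerate(x):
        s = max(0.0, s + (v - target - slack))  # accumulate excess above target+slack
        if s > threshold:
            return i
    return None  # no change detected

# Invented signal: mean shifts from ~0 to ~2 at index 4.
signal = [0.1, -0.2, 0.0, 0.1, 2.1, 2.0, 2.2, 1.9]
print(cusum_change_point(signal, target=0.0, slack=0.5, threshold=2.0))
```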
Cognitive Neurodynamics, Pub Date: 2025-12-01, Epub Date: 2025-08-19, DOI: 10.1007/s11571-025-10315-0
Kuo-Shou Chiu, Jyh-Cheng Jeng, Tongxing Li, Fernando Córdova-Lepe
{"title":"Global exponential stability of periodic solutions for Cohen-Grossberg neural networks involving generalized piecewise constant delay.","authors":"Kuo-Shou Chiu, Jyh-Cheng Jeng, Tongxing Li, Fernando Córdova-Lepe","doi":"10.1007/s11571-025-10315-0","DOIUrl":"10.1007/s11571-025-10315-0","url":null,"abstract":"<p><p>This paper investigates the global exponential stability and periodicity of the Cohen-Grossberg neural network model with generalized piecewise constant delay. By applying Schaefer's fixed-point theorem, a sufficient condition for the existence of periodic solutions in the model is established. Additionally, by constructing appropriate differential inequalities with generalized piecewise constant delay, sufficient conditions for the global exponential stability of the model are obtained. Finally, computer simulations are conducted to illustrate a globally exponentially stable periodic Cohen-Grossberg neural network model, thereby confirming the feasibility and effectiveness of the proposed results.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"129"},"PeriodicalIF":3.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12364798/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144945566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
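For readers unfamiliar with the model class: a generic Cohen-Grossberg network with a piecewise constant (deviating) argument takes the following form. The notation is the conventional one and is an assumption here, not copied from the paper:

```latex
% x_i: state of neuron i; a_i: amplification function; b_i: behaved function;
% f_j, g_j: activation functions; gamma(t): piecewise constant argument; I_i: input.
x_i'(t) = -a_i\bigl(x_i(t)\bigr)\Bigl[\, b_i\bigl(x_i(t)\bigr)
    - \sum_{j=1}^{n} c_{ij}\, f_j\bigl(x_j(t)\bigr)
    - \sum_{j=1}^{n} d_{ij}\, g_j\bigl(x_j(\gamma(t))\bigr) - I_i \Bigr],
\qquad i = 1, \dots, n.
```

The stability analysis then asks when periodic solutions of this system attract all trajectories at an exponential rate.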
Cognitive Neurodynamics, Pub Date: 2025-12-01, Epub Date: 2025-06-17, DOI: 10.1007/s11571-025-10285-3
Baolong Sun, Yihong Wang, Xuying Xu, Xiaochuan Pan
{"title":"Visual statistical learning based on a coupled shape-position recurrent neural network model.","authors":"Baolong Sun, Yihong Wang, Xuying Xu, Xiaochuan Pan","doi":"10.1007/s11571-025-10285-3","DOIUrl":"10.1007/s11571-025-10285-3","url":null,"abstract":"<p><p>The visual system has the ability to learn the statistical regularities (temporal and/or spatial) that characterize the visual scene automatically and implicitly. This ability is referred to as visual statistical learning (VSL). VSL can group several objects that have fixed statistical properties into a chunk. This complex process relies on the collaborative involvement of multiple brain regions that work together to learn the chunk. Although behavioral experiments have explored cognitive functions of VSL, its computational mechanisms remain poorly understood. To address this issue, this study proposes a coupled shape-position recurrent neural network model based on the anatomical structure of the visual system to explain how chunk information is learned and represented in neural networks. The model comprises three core modules: the position network, which encodes object position information; the shape network, which encodes object shape information; and the decision network, which integrates the neuronal activity in the position and shape networks to make decisions. The model successfully simulates the results of a classic spatial VSL experiment. The distribution of neural firing rates in the decision network shows a significant difference between chunk and non-chunk conditions. Specifically, neurons in the chunk condition exhibit stronger firing rates than those in the non-chunk condition. Furthermore, after the model learns a scene containing both chunk and non-chunk stimuli, neurons in the position network selectively encode far and near stimuli, respectively. In contrast, neurons in the shape network distinguish between chunk and non-chunk. 
The chunk-encoding neurons selectively respond to specific chunks. These results indicate that the proposed model is able to learn spatial regularities of the stimuli to discriminate chunks from non-chunks, and neurons in the shape network selectively respond to chunk and non-chunk information. These findings offer important theoretical insights into the representation mechanisms of chunk information in neural networks and propose a new framework for modeling spatial VSL.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"96"},"PeriodicalIF":3.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12174023/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144332590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
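The statistical regularities that define a chunk are often summarized as transitional probabilities between successive stimuli. This toy estimator (the stimulus stream is invented) shows how a within-chunk transition stands out from the rest:

```python
# Estimate P(next | current) from a stimulus stream; in VSL paradigms,
# within-chunk transitions have higher probability than between-chunk ones.
from collections import Counter, defaultdict

def transition_probs(stream):
    pair_counts = defaultdict(Counter)
    for a, b in zip(stream, stream[1:]):
        pair_counts[a][b] += 1
    return {a: {b: c / sum(cnt.values()) for b, c in cnt.items()}
            for a, cnt in pair_counts.items()}

# Invented stream: "AB" is a chunk (A is always followed by B); "C" is a filler.
stream = list("ABCABABCAB")
probs = transition_probs(stream)
print(probs["A"]["B"])  # within-chunk transition probability: 1.0
```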
Cognitive Neurodynamics, Pub Date: 2025-12-01, Epub Date: 2025-09-20, DOI: 10.1007/s11571-025-10337-8
Rashmi Mishra, R K Agrawal, Jyoti Singh Kirar
{"title":"Msst-eegnet: multi-scale spatio-temporal feature extraction using inception and temporal pyramid pooling for motor imagery classification.","authors":"Rashmi Mishra, R K Agrawal, Jyoti Singh Kirar","doi":"10.1007/s11571-025-10337-8","DOIUrl":"https://doi.org/10.1007/s11571-025-10337-8","url":null,"abstract":"<p><p>Motor imagery classification is an essential component of Brain-computer interface systems to interpret and recognize brain signals generated during the visualization of motor imagery tasks by a subject. The objective of this work is to develop a novel DL model to extract discriminative features for better generalization performance to recognize motor imagery tasks. This paper presents a novel Multi-scale spatio-temporal network (MSST-EEGNet) to extract discriminative temporal, spectral, and spatial features for motor imagery task classification. The proposed MSST-EEGNet model includes three modules, namely the inception module with dilated convolution, the temporal pyramid pooling module, and the classification module. Multi-scale temporal features along with spatial features are extracted using the inception block with the dilated convolution module. A set of multi-level fine-grained and coarse-grained features are extracted using a temporal pyramid pooling module. Further, categorical cross-entropy in combination with center loss is used as a loss function. Experiments are carried out on three benchmark datasets including the BCI Competition IV-2a dataset, the BCI Competition IV-2b dataset, and the OpenBMI dataset. The evaluation results show that the proposed MSST-EEGNet model outperforms eight existing DL models in terms of classification accuracy for subject-specific and cross-session settings. It also outperforms eight existing DL models and six existing transfer-learning models for the cross-subject setting. 
For the subject-specific classification the proposed MSST-EEGNet model achieved an accuracy of 0.8426 ± 0.1061, 0.7779 ± 0.0938, and 0.7365 ± 0.1477 on the BCI Competition IV-2a dataset, the BCI Competition IV-2b dataset, and the OpenBMI dataset respectively. For the cross-session setting, the proposed MSST-EEGNet model achieved an accuracy of 0.7709 ± 0.1098, 0.7524 ± 0.1017, and 0.6860 ± 0.0990 on the BCI Competition IV-2a dataset, the BCI Competition IV-2b dataset, and the OpenBMI dataset respectively. For the cross-subject setting, the proposed MSST-EEGNet model achieved an accuracy of 0.7288 ± 0.0730, 0.8161 ± 0.963, and 0.7075 ± 0.0746 on the BCI Competition IV-2a dataset, the BCI Competition IV-2b dataset, and the OpenBMI dataset respectively. Furthermore, a non-parametric Friedman statistical test demonstrates statistically significant superior performance of the proposed MSST-EEGNet model over the existing models.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"150"},"PeriodicalIF":3.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12450197/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145124262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
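Temporal pyramid pooling as described above — multi-level fine- and coarse-grained features from one sequence — can be sketched as pooling at several granularities and concatenating the results. The levels and the choice of mean-pooling are illustrative, not MSST-EEGNet's exact configuration:

```python
# Temporal pyramid pooling sketch: pool a variable-length feature sequence
# into 1, 2, and 4 bins and concatenate, yielding a fixed-size vector.
def temporal_pyramid_pool(seq, levels=(1, 2, 4)):
    out = []
    n = len(seq)
    for bins in levels:
        for b in range(bins):
            lo, hi = b * n // bins, (b + 1) * n // bins
            chunk = seq[lo:hi] or [0.0]          # guard against empty bins
            out.append(sum(chunk) / len(chunk))  # mean-pool each bin
    return out

feats = temporal_pyramid_pool([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
print(len(feats))  # 1 + 2 + 4 = 7 pooled values regardless of input length
```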
Cognitive Neurodynamics, Pub Date: 2025-12-01, Epub Date: 2025-09-27, DOI: 10.1007/s11571-025-10318-x
Tao Liang, Junxiao Yu, Keke Shi, Yihao Yao, Jie Li, Bin Liu, Wei Wang, Chengyu Liu, Liangcheng Qu, Kuiying Yin, Wentao Xiang, Jianqing Li
{"title":"Construction and evaluation of an emotion-inducing video dataset towards Chinese elderly healthy controls and individuals with mild cognitive impairment.","authors":"Tao Liang, Junxiao Yu, Keke Shi, Yihao Yao, Jie Li, Bin Liu, Wei Wang, Chengyu Liu, Liangcheng Qu, Kuiying Yin, Wentao Xiang, Jianqing Li","doi":"10.1007/s11571-025-10318-x","DOIUrl":"https://doi.org/10.1007/s11571-025-10318-x","url":null,"abstract":"<p><p>This work aimed to develop and validate an emotion-inducing video dataset for the Chinese elderly. The dataset was constructed by video collection, psychological evaluation, and elderly examination. Eighteen videos across six emotions (neutrality, sadness, anger, happiness, boredom, and tension) were selected for emotional induction. The effectiveness of the dataset was evaluated in 37 subjects, with two groups, 21 healthy controls (HC group) and 16 individuals with mild cognitive impairment (MCI group), who were assessed in a three-session experiment. Each session comprised one pretest and six emotion-inducing videos. The electrocardiogram (ECG) and electroencephalography (EEG) signals were synchronously recorded. After viewing each video, the subjects provided self-reports of discrete emotion labels, valence, and arousal scores using a modified Self-Assessment Manikin scale. Discrete emotion analysis, valence/arousal analysis, and ECG feature analysis were conducted using ANOVA. EEG feature analysis was assessed with a linear mixed-effects model. Discrete emotion analysis confirmed that happiness and sadness induced by the dataset showed high agreement rates (e.g., happiness: HC 0.79, MCI 0.85, and sadness: HC 0.81, MCI 0.71), whereas boredom (HC 0.38, MCI 0.29) showed a comparatively lower consistency. Valence/arousal analysis revealed significant group differences for tension and boredom emotions. 
ECG feature analysis revealed significant differences in the baseline-normalized mean heart rate between HC and MCI groups in specific sessions. EEG feature analysis revealed that the MCI group exhibited higher relative band power values than did the HC group in the <math><mi>δ</mi></math> and <math><mi>θ</mi></math> bands.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s11571-025-10318-x.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"154"},"PeriodicalIF":3.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12476350/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145191080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
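The agreement rates quoted above are, in essence, the fraction of self-reports matching the target emotion. A trivial sketch with invented labels (not the study's data):

```python
# Per-emotion agreement rate: fraction of subjects whose self-reported
# discrete emotion label matches the target emotion of the video.
def agreement_rate(target, reports):
    return sum(r == target for r in reports) / len(reports)

# Invented self-reports from four subjects after a happiness-inducing video.
reports = ["happy", "happy", "neutral", "happy"]
print(agreement_rate("happy", reports))  # 0.75
```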
{"title":"Cross-patient seizure prediction via continuous domain adaptation and similar sample replay.","authors":"Ziye Zhang, Aiping Liu, Yikai Gao, Ruobing Qian, Xun Chen","doi":"10.1007/s11571-024-10216-8","DOIUrl":"10.1007/s11571-024-10216-8","url":null,"abstract":"<p><p>Seizure prediction based on electroencephalogram (EEG) for people with epilepsy, a common brain disorder worldwide, has great potential for life quality improvement. To alleviate the high degree of heterogeneity among patients, several works have attempted to learn common seizure feature distributions based on the idea of domain adaptation to enhance the generalization ability of the model. However, existing methods ignore the inherent inter-patient discrepancy within the source patients, resulting in disjointed distributions that impede effective domain alignment. To eliminate this effect, we introduce the concept of multi-source domain adaptation (MSDA), considering each source patient as a separate domain. To avoid additional model complexity from MSDA, we propose a continuous domain adaptation approach for seizure prediction based on the convolutional neural network (CNN), which performs sequential training on multiple source domains. To mitigate catastrophic forgetting during sequential training, we replay similar samples from each source domain, while learning common feature representations based on subdomain alignment. Evaluated on a publicly available epilepsy dataset, our proposed method attains a sensitivity of 85.0% and a false prediction rate (FPR) of 0.224/h. 
Compared to the prevailing domain adaptation paradigm and existing domain adaptation works in the field, the proposed method can efficiently capture the knowledge of different patients, extract better common seizure representations, and achieve state-of-the-art performance.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"26"},"PeriodicalIF":3.1,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11735696/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143001017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
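The similar-sample replay idea — retaining source-domain samples and replaying those most similar to the current domain during sequential training — can be sketched with 1-D features and a Euclidean distance. Both are illustrative stand-ins; the paper operates on learned CNN representations:

```python
# Replay selection sketch: from a buffer of earlier-domain samples, pick
# the k samples closest to the current training batch's mean feature.
def select_replay(buffer, current_batch, k):
    mean = sum(current_batch) / len(current_batch)
    return sorted(buffer, key=lambda s: abs(s - mean))[:k]

buffer = [0.0, 1.0, 5.0, 9.0, 10.0]  # invented samples from earlier patients
current = [4.0, 5.0, 6.0]            # invented batch from the current patient
print(select_replay(buffer, current, k=2))  # the two buffered samples nearest 5.0
```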
{"title":"Effects of physical exercise on cognitive and motor function in patients with Alzheimer's disease: a meta-analysis based on randomized controlled trials.","authors":"Yuxin Gai, Xuelian Dai, Mengyi Qian, Guojian Lin, Piaorou Pan, Tianfu Dai, Yuedan Luo, Lijing Su","doi":"10.1007/s11571-025-10326-x","DOIUrl":"10.1007/s11571-025-10326-x","url":null,"abstract":"<p><p>This study investigated the effects of physical activity on cognitive and motor function in Alzheimer's disease patients. This study searched randomized controlled trials (RCTs) from PubMed, EMBASE, Science Direct, and Web of Science databases up to October 2024. The main evaluation tools were the Mini-Mental State Examination (MMSE), the Timed Up and Go Test (TUG), the 6-Minute Walk Test (6MWT), and the Alzheimer's Disease Assessment Scale-cognitive subscale (ADAS-cog). Mean differences (MDs) with 95% confidence intervals (CIs) were calculated. A total of 25 randomized controlled trials involving 2213 participants were included. The MMSE score in the exercise group was higher than that in the control group (MD = 2.24, <i>p</i> = 0.002). Aerobic exercise (MD = 2.83, <i>p</i> = 0.01) and combined exercise (MD = 3.09, <i>p</i> = 0.03) in the exercise group were significantly better than those in the control group. There was no significant difference in strength exercise between the two groups (MD = 0.54, <i>p</i> = 0.48). At low intensity (MD = 5.75, <i>p</i> < 0.001) and moderate intensity (MD = 1.74, <i>p</i> = 0.008), MMSE scores in the exercise group were higher than those in the control group, whereas high-intensity exercise showed no benefit (MD = 0, <i>p</i> = 0.99). On the 6MWT scale, aerobic exercise scores were higher in the exercise group (MD = 51.55, <i>p</i> = 0.03), while there was no significant difference between the two groups under combined exercise (MD = 62.76, <i>p</i> = 0.45). 
The TUG scale (MD = -0.76, <i>p</i> = 0.06) and the ADAS-cog scale (MD = -1.99, <i>p</i> = 0.23) showed no significant difference between the two groups. Low-intensity aerobic exercise improved cognitive and motor function in Alzheimer's disease patients, while strength exercise or high-intensity exercise had little effect.</p><p><strong>Supplementary information: </strong>The online version contains supplementary material available at 10.1007/s11571-025-10326-x.</p>","PeriodicalId":10500,"journal":{"name":"Cognitive Neurodynamics","volume":"19 1","pages":"133"},"PeriodicalIF":3.9,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12373596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144945540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
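The pooled MD and 95% CI values reported in such meta-analyses are conventionally obtained by inverse-variance weighting. The sketch below uses a fixed-effect scheme with invented study values; whether this particular study used a fixed- or random-effects model is not stated in the abstract:

```python
import math

# Fixed-effect inverse-variance pooling of mean differences (MDs):
# each study is weighted by 1/SE^2, and a 95% CI uses +/- 1.96 * pooled SE.
def pool_fixed_effect(mds, ses):
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Two invented studies: MD=2.0 (SE 0.5) and MD=3.0 (SE 1.0).
md, ci = pool_fixed_effect([2.0, 3.0], [0.5, 1.0])
print(round(md, 2))  # weights 4 and 1 -> (4*2 + 1*3) / 5 = 2.2
```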