Title: Artificial intelligence in stroke risk assessment and management via retinal imaging.
Authors: Parsa Khalafi, Soroush Morsali, Sana Hamidi, Hamidreza Ashayeri, Navid Sobhi, Siamak Pedrammehr, Ali Jafarizadeh
Journal: Frontiers in Computational Neuroscience, 19:1490603
DOI: 10.3389/fncom.2025.1490603
Published: 2025-02-17 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11872910/pdf/
Abstract: Retinal imaging, used for assessing stroke-related retinal changes, is a non-invasive and cost-effective method that can be enhanced by machine learning and deep learning algorithms, showing promise in early disease detection, severity grading, and prognostic evaluation in stroke patients. This review explores the role of artificial intelligence (AI) in stroke patient care, focusing on the integration of retinal imaging into clinical workflows. Retinal imaging has revealed several microvascular changes, including a decrease in central retinal artery diameter and an increase in central retinal vein diameter, both of which are associated with lacunar stroke and intracranial hemorrhage. Additional microvascular changes, such as arteriovenous nicking, increased vessel tortuosity, enhanced arteriolar light reflex, decreased retinal fractal dimension, and thinning of the retinal nerve fiber layer, are also reported to be associated with higher stroke risk. AI models, such as Xception and EfficientNet, have demonstrated accuracy comparable to traditional stroke risk scoring systems in predicting stroke risk. For stroke diagnosis, models such as Inception, ResNet, and VGG, alongside machine learning classifiers, have shown high efficacy in distinguishing stroke patients from healthy individuals using retinal imaging. Moreover, a random forest model effectively distinguished between ischemic and hemorrhagic stroke subtypes based on retinal features, showing superior predictive performance compared to traditional clinical characteristics. Additionally, a support vector machine model has achieved high classification accuracy in assessing pial collateral status. Despite these advancements, challenges persist, including the lack of standardized protocols across imaging modalities, hesitance in trusting AI-generated predictions, insufficient integration of retinal imaging data with electronic health records, the need for validation across diverse populations, and ethical and regulatory concerns. Future efforts must focus on validating AI models across diverse populations, ensuring algorithm transparency, and addressing ethical and regulatory issues to enable broader implementation. Overcoming these barriers will be essential for translating this technology into personalized stroke care and improving patient outcomes.
Title: EEG electrode setup optimization using feature extraction techniques for neonatal sleep state classification.
Authors: Hafza Ayesha Siddiqa, Muhammad Farrukh Qureshi, Arsalan Khurshid, Yan Xu, Laishuan Wang, Saadullah Farooq Abbasi, Chen Chen, Wei Chen
Journal: Frontiers in Computational Neuroscience, 19:1506869
DOI: 10.3389/fncom.2025.1506869
Published: 2025-01-31 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11825521/pdf/
Abstract: An optimal electrode arrangement during data collection is essential for gaining a deeper understanding of neonatal sleep and assessing cognitive health, while also reducing technical complexity and the risk of skin irritation. Using electroencephalography (EEG) data, a long short-term memory (LSTM) classifier categorizes neonatal sleep states. A total of 16,803 30-second segments were collected from 64 infants between 36 and 43 weeks of age at Fudan University Children's Hospital to train and test the proposed model. To enhance the performance of the LSTM-based classification model, 94 linear and nonlinear features in the time and frequency domains, together with three novel features (detrended fluctuation analysis (DFA), Lyapunov exponent, and multiscale fluctuation entropy), are extracted. Class imbalance is addressed using the SMOTE technique. In addition, the most significant features are identified and prioritized using principal component analysis (PCA). Compared with other single channels, the C3 channel achieves an accuracy of 80.75% ± 0.82%, with a kappa value of 0.76. Classification accuracy for the four left-side electrodes is higher (82.71% ± 0.88%) than for the four right-side electrodes (81.14% ± 0.77%), with kappa values of 0.78 and 0.76, respectively. The results suggest that specific EEG channels play an important role in sleep-stage classification and point toward an optimal electrode configuration. Moreover, this research can be used to improve neonatal care by monitoring sleep, enabling early detection of sleep disorders. The approach captures information effectively from a single channel, reducing computational load while maintaining performance. Incorporating linear and nonlinear time- and frequency-domain features into sleep staging allows newborn sleep dynamics and irregularities to be better understood.
Title: MUNet: a novel framework for accurate brain tumor segmentation combining UNet and mamba networks.
Authors: Lijuan Yang, Qiumei Dong, Da Lin, Chunfang Tian, Xinliang Lü
Journal: Frontiers in Computational Neuroscience, 19:1513059
DOI: 10.3389/fncom.2025.1513059
Published: 2025-01-29 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11814164/pdf/
Abstract: Brain tumors are one of the major health threats to humans, and their complex pathological features and anatomical structures make accurate segmentation and detection crucial. However, existing models based on Transformers and Convolutional Neural Networks (CNNs) still have limitations in medical image processing. While Transformers are proficient in capturing global features, they suffer from high computational complexity and require large amounts of data for training. On the other hand, CNNs perform well in extracting local features but have limited performance when handling global information. To address these issues, this paper proposes a novel network framework, MUNet, which combines the advantages of UNet and Mamba, specifically designed for brain tumor segmentation. MUNet introduces the SD-SSM module, which effectively captures both global and local features of the image through selective scanning and state-space modeling, significantly improving segmentation accuracy. Additionally, we design the SD-Conv structure, which reduces feature redundancy without increasing model parameters, further enhancing computational efficiency. Finally, we propose a new loss function that combines mIoU loss, Dice loss, and Boundary loss, which improves segmentation overlap, similarity, and boundary accuracy from multiple perspectives. Experimental results show that, on the BraTS2020 dataset, MUNet achieves DSC values of 0.835, 0.915, and 0.823 for enhancing tumor (ET), whole tumor (WT), and tumor core (TC), respectively, and Hausdorff95 scores of 2.421, 3.755, and 6.437. On the BraTS2018 dataset, MUNet achieves DSC values of 0.815, 0.901, and 0.815, with Hausdorff95 scores of 4.389, 6.243, and 6.152, all outperforming existing methods and achieving significant performance improvements. Furthermore, when validated on the independent LGG dataset, MUNet demonstrated excellent generalization ability, proving its effectiveness in various medical imaging scenarios. The code is available at https://github.com/Dalin1977331/MUNet.
Title: Automated karyogram analysis for early detection of genetic and neurodegenerative disorders: a hybrid machine learning approach.
Authors: Sumaira Tabassum, M Jawad Khan, Javaid Iqbal, Asim Waris, M Adeel Ijaz
Journal: Frontiers in Computational Neuroscience, 18:1525895
DOI: 10.3389/fncom.2024.1525895
Published: 2025-01-22 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11794836/pdf/
Abstract: Anomalous chromosomes are the cause of genetic diseases such as cancer, Alzheimer's, Parkinson's, epilepsy, and autism. Karyotype analysis is the standard procedure for diagnosing genetic disorders. Identifying anomalies is often costly, time-consuming, heavily reliant on expert interpretation, and requires considerable manual effort. Efforts are being made to automate karyogram analysis. However, the unavailability of large datasets, particularly those including samples with chromosomal abnormalities, presents a significant challenge. Developing automated models requires extensive labeled data, particularly abnormal samples, to accurately identify and analyze abnormalities, and such data are difficult to obtain in sufficient quantities. Although deep learning-based architectures have yielded state-of-the-art performance in medical image anomaly detection, they generalize poorly because of the lack of anomalous datasets. This study introduces a novel hybrid approach that combines unsupervised and supervised learning techniques to overcome the challenges of limited labeled data and scalability in chromosomal analysis. An autoencoder-based system is initially trained with unlabeled data to identify chromosome patterns, then fine-tuned on labeled data, followed by a classification step using a Convolutional Neural Network (CNN). A unique dataset of 234,259 chromosome images, spanning the training, validation, and test sets, was used, marking a significant achievement in the scale of chromosomal analysis. The proposed hybrid system accurately detects structural anomalies in individual chromosome images, achieving 99.3% accuracy in classifying normal and abnormal chromosomes. We also used the structural similarity index measure and template matching to identify the part of an abnormal chromosome that differs from its normal counterpart. This automated model has the potential to significantly contribute to the early detection and diagnosis of chromosome-related disorders that affect both genetic health and neurological behavior.
Title: Motion feature extraction using magnocellular-inspired spiking neural networks for drone detection.
Authors: Jiayi Zheng, Yaping Wan, Xin Yang, Hua Zhong, Minghua Du, Gang Wang
Journal: Frontiers in Computational Neuroscience, 19:1452203
DOI: 10.3389/fncom.2025.1452203
Published: 2025-01-22 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11794278/pdf/
Abstract: Traditional object detection methods usually underperform when locating tiny or small drones against complex backgrounds, since the appearance features of the targets and the backgrounds are highly similar. To address this, inspired by magnocellular motion processing mechanisms, we propose to exploit the spatial-temporal characteristics of flying drones using spiking neural networks, developing the Magno-Spiking Neural Network (MG-SNN) for drone detection. The MG-SNN learns to identify potential regions of moving targets through motion saliency estimation and integrates this information into popular object detection algorithms, yielding a retinal-inspired spiking neural network module for drone motion extraction and an object detection architecture that fuses motion and spatial features before detection to enhance accuracy. To design and train the MG-SNN, we propose a new backpropagation method called Dynamic Threshold Multi-frame Spike Time Sequence (DT-MSTS) and establish a dataset for training and validating the MG-SNN, effectively extracting and updating visual motion features. Experimental results indicate that incorporating the MG-SNN significantly improves the accuracy of low-altitude drone detection tasks compared with popular small-object detection algorithms, acting as a cheap plug-and-play module for detecting small flying targets against complex backgrounds.
{"title":"Global remapping emerges as the mechanism for renewal of context-dependent behavior in a reinforcement learning model.","authors":"David Kappel, Sen Cheng","doi":"10.3389/fncom.2024.1462110","DOIUrl":"https://doi.org/10.3389/fncom.2024.1462110","url":null,"abstract":"<p><strong>Introduction: </strong>The hippocampal formation exhibits complex and context-dependent activity patterns and dynamics, e.g., place cell activity during spatial navigation in rodents or remapping of place fields when the animal switches between contexts. Furthermore, rodents show context-dependent renewal of extinguished behavior. However, the link between context-dependent neural codes and context-dependent renewal is not fully understood.</p><p><strong>Methods: </strong>We use a deep neural network-based reinforcement learning agent to study the learning dynamics that occur during spatial learning and context switching in a simulated ABA extinction and renewal paradigm in a 3D virtual environment.</p><p><strong>Results: </strong>Despite its simplicity, the network exhibits a number of features typically found in the CA1 and CA3 regions of the hippocampus. A significant proportion of neurons in deeper layers of the network are tuned to a specific spatial position of the agent in the environment-similar to place cells in the hippocampus. These complex spatial representations and dynamics occur spontaneously in the hidden layer of a deep network during learning. These spatial representations exhibit global remapping when the agent is exposed to a new context. The spatial maps are restored when the agent returns to the previous context, accompanied by renewal of the conditioned behavior. Remapping is facilitated by memory replay of experiences during training.</p><p><strong>Discussion: </strong>Our results show that integrated codes that jointly represent spatial and task-relevant contextual variables are the mechanism underlying renewal in a simulated DQN agent.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"18 ","pages":"1462110"},"PeriodicalIF":2.1,"publicationDate":"2025-01-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11774835/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143064655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How to be an integrated information theorist without losing your body.","authors":"Ignacio Cea, Camilo Miguel Signorelli","doi":"10.3389/fncom.2024.1510066","DOIUrl":"10.3389/fncom.2024.1510066","url":null,"abstract":"","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"18 ","pages":"1510066"},"PeriodicalIF":2.1,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754206/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143028479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Memory consolidation from a reinforcement learning perspective.","authors":"Jong Won Lee, Min Whan Jung","doi":"10.3389/fncom.2024.1538741","DOIUrl":"10.3389/fncom.2024.1538741","url":null,"abstract":"<p><p>Memory consolidation refers to the process of converting temporary memories into long-lasting ones. It is widely accepted that new experiences are initially stored in the hippocampus as rapid associative memories, which then undergo a consolidation process to establish more permanent traces in other regions of the brain. Over the past two decades, studies in humans and animals have demonstrated that the hippocampus is crucial not only for memory but also for imagination and future planning, with the CA3 region playing a pivotal role in generating novel activity patterns. Additionally, a growing body of evidence indicates the involvement of the hippocampus, especially the CA1 region, in valuation processes. Based on these findings, we propose that the CA3 region of the hippocampus generates diverse activity patterns, while the CA1 region evaluates and reinforces those patterns most likely to maximize rewards. This framework closely parallels Dyna, a reinforcement learning algorithm introduced by Sutton in 1991. In Dyna, an agent performs offline simulations to supplement trial-and-error value learning, greatly accelerating the learning process. We suggest that memory consolidation might be viewed as a process of deriving optimal strategies based on simulations derived from limited experiences, rather than merely strengthening incidental memories. From this perspective, memory consolidation functions as a form of offline reinforcement learning, aimed at enhancing adaptive decision-making.</p>","PeriodicalId":12363,"journal":{"name":"Frontiers in Computational Neuroscience","volume":"18 ","pages":"1538741"},"PeriodicalIF":2.1,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751224/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143022492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Multimodal consumer choice prediction using EEG signals and eye tracking.
Authors: Syed Muhammad Usman, Shehzad Khalid, Aimen Tanveer, Ali Shariq Imran, Muhammad Zubair
Journal: Frontiers in Computational Neuroscience, 18:1516440
DOI: 10.3389/fncom.2024.1516440
Published: 2025-01-08 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751216/pdf/
Abstract: Marketing plays a vital role in the success of a business, driving customer engagement, brand recognition, and revenue growth. Neuromarketing adds depth to this by using insights into consumer behavior, derived from brain activity and emotional responses, to create more effective marketing strategies. Electroencephalography (EEG) has typically been utilized by researchers for neuromarketing, whereas eye tracking (ET) has remained unexplored. To address this gap, we propose a novel multimodal approach to predict consumer choices by integrating EEG and ET data. Noise in the EEG signals is mitigated using a bandpass filter, Artifact Subspace Reconstruction (ASR), and Fast Orthogonal Regression for Classification and Estimation (FORCE). Class imbalance is handled with the Synthetic Minority Over-sampling Technique (SMOTE). Handcrafted features, including statistical and wavelet features, and automated features from a Convolutional Neural Network and Long Short-Term Memory network (CNN-LSTM) are extracted and concatenated to generate a feature space representation. For the ET data, preprocessing involves interpolation, gaze plots, and SMOTE, followed by feature extraction using LeNet-5 and handcrafted features such as fixations and saccades. A multimodal feature space representation is generated by feature-level fusion of the EEG and ET features and fed into a meta-learner-based ensemble classifier with three base classifiers (Random Forest, Extended Gradient Boosting, and Gradient Boosting) and Random Forest as the meta-classifier, to classify buy vs. not buy. The performance of the proposed approach is evaluated using a variety of metrics, including accuracy, precision, recall, and F1 score. Our model demonstrated superior performance compared to competitors, achieving 84.01% accuracy in predicting consumer choices and 83% precision in identifying positive consumer preferences.