Brain Informatics | Pub Date: 2024-12-18 | DOI: 10.1186/s40708-024-00246-7
Hong Zhang, Zhikang Lu, Peicong Gong, Shilong Zhang, Xiaoquan Yang, Xiangning Li, Zhao Feng, Anan Li, Chi Xiao
{"title":"High-throughput mesoscopic optical imaging data processing and parsing using differential-guided filtered neural networks.","authors":"Hong Zhang, Zhikang Lu, Peicong Gong, Shilong Zhang, Xiaoquan Yang, Xiangning Li, Zhao Feng, Anan Li, Chi Xiao","doi":"10.1186/s40708-024-00246-7","DOIUrl":"10.1186/s40708-024-00246-7","url":null,"abstract":"<p><p>High-throughput mesoscopic optical imaging technology has tremendously boosted the efficiency of procuring massive mesoscopic datasets from mouse brains. Constrained by the imaging field of view, the image strips obtained by such technologies typically require further processing, such as cross-sectional stitching, artifact removal, and signal area cropping, to meet the requirements of subsequent analysis. However, a batch of raw array mouse brain data at a resolution of <math><mrow><mn>0.65</mn> <mo>×</mo> <mn>0.65</mn> <mo>×</mo> <mn>3</mn> <mspace></mspace> <mi>μ</mi> <msup><mtext>m</mtext> <mn>3</mn></msup> </mrow> </math> can reach 220 TB, and the cropping of the outer contour areas in subsequent processing still relies on manual visual observation, which consumes substantial computational resources and labor costs. In this paper, we design an efficient deep differential guided filtering module (DDGF) by fusing multi-scale iterative differential guided filtering with deep learning, which effectively refines image details while mitigating background noise. Subsequently, by amalgamating DDGF with a deep learning network, we propose a lightweight deep differential guided filtering segmentation network (DDGF-SegNet), which demonstrates robust performance on our dataset, achieving a Dice score of 0.92, a Precision of 0.98, a Recall of 0.91, and a Jaccard index of 0.86. Building on the segmentation, we use connectivity analysis to ascertain the three-dimensional spatial orientation of each brain within the array. 
Furthermore, we streamline the entire processing workflow by developing an automated pipeline optimized for cluster-based message passing interface (MPI) parallel computation, which reduces the processing time for a mouse brain dataset to a mere 1.1 h, improving efficiency over manual processing by 25 times and overall data processing efficiency by 2.4 times, paving the way for more efficient big data processing and parsing in high-throughput mesoscopic optical imaging.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"32"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655801/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-12-18 | DOI: 10.1186/s40708-024-00245-8
Yihang Dong, Changhong Jing, Mufti Mahmud, Michael Kwok-Po Ng, Shuqiang Wang
{"title":"Enhancing cross-subject emotion recognition precision through unimodal EEG: a novel emotion preceptor model.","authors":"Yihang Dong, Changhong Jing, Mufti Mahmud, Michael Kwok-Po Ng, Shuqiang Wang","doi":"10.1186/s40708-024-00245-8","DOIUrl":"10.1186/s40708-024-00245-8","url":null,"abstract":"<p><p>Affective computing is a key research area in computer science, neuroscience, and psychology, aimed at enabling computers to recognize, understand, and respond to human emotional states. As the demand for affective computing technology grows, emotion recognition methods based on physiological signals have become research hotspots. Among these, electroencephalogram (EEG) signals, which reflect brain activity, are highly promising. However, due to individual physiological and anatomical differences, EEG signals introduce noise, reducing emotion recognition performance. Additionally, the synchronous collection of multimodal data in practical applications requires high equipment and environmental standards, limiting the practical use of EEG signals. To address these issues, this study proposes the Emotion Preceptor, a cross-subject emotion recognition model based on unimodal EEG signals. This model introduces a Static Spatial Adapter to integrate spatial information in EEG signals, reducing individual differences and extracting robust encoding information. The Temporal Causal Network then leverages temporal information to extract beneficial features for emotion recognition, achieving precise recognition based on unimodal EEG signals. Extensive experiments on the SEED and SEED-V datasets demonstrate the superior performance of the Emotion Preceptor and validate the effectiveness of the new data processing method that combines DE features in a temporal sequence. 
Additionally, we analyzed the model's data flow and encoding methods from a biological interpretability perspective and validated it with neuroscience research related to emotion generation and regulation, promoting further development in emotion recognition research based on EEG signals.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"31"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655793/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-12-18 | DOI: 10.1186/s40708-024-00242-x
Rui Li, Xuanwen Yang, Jun Lou, Junsong Zhang
{"title":"A temporal-spectral graph convolutional neural network model for EEG emotion recognition within and across subjects.","authors":"Rui Li, Xuanwen Yang, Jun Lou, Junsong Zhang","doi":"10.1186/s40708-024-00242-x","DOIUrl":"10.1186/s40708-024-00242-x","url":null,"abstract":"<p><p>EEG-based emotion recognition uses high-level information from neural activities to predict emotional responses in subjects. However, this information is sparsely distributed in the frequency, time, and spatial domains and varies across subjects. To address these challenges in emotion recognition, we propose a novel neural network model named Temporal-Spectral Graph Convolutional Network (TSGCN). To capture high-level information distributed in the time, spatial, and frequency domains, TSGCN considers both neural oscillation changes in different time windows and topological structures between different brain regions. Specifically, a Minimum Category Confusion (MCC) loss is used in TSGCN to reduce the inconsistencies between subjective ratings and predefined labels. In addition, to improve the generalization of TSGCN under cross-subject variation, we propose Deep and Shallow feature Dynamic Adversarial Learning (DSDAL) to calculate the distance between the source domain and the target domain. Extensive experiments were conducted on public datasets to demonstrate that TSGCN outperforms state-of-the-art methods in EEG-based emotion recognition. Ablation studies show that the mixed neural networks and our proposed methods in TSGCN significantly contribute to its high performance and robustness. 
Detailed investigations further demonstrate the effectiveness of TSGCN in addressing the challenges of emotion recognition.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"30"},"PeriodicalIF":0.0,"publicationDate":"2024-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11655824/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142847680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-12-05 | DOI: 10.1186/s40708-024-00243-w
Tianhua Chen
{"title":"Can heart rate sequences from wearable devices predict day-long mental states in higher education students: a signal processing and machine learning case study at a UK university.","authors":"Tianhua Chen","doi":"10.1186/s40708-024-00243-w","DOIUrl":"10.1186/s40708-024-00243-w","url":null,"abstract":"<p><p>The mental health of students in higher education has been a growing concern, with increasing evidence pointing to heightened risks of developing mental health conditions. This research aims to explore whether day-long heart rate sequences, collected continuously through an Apple Watch in an open environment without restrictions on daily routines, can effectively indicate mental states, particularly stress, for university students. While heart rate (HR) is commonly used to monitor physical activity or responses to isolated stimuli in a controlled setting, such as stress-inducing tests, this study addresses the gap by analyzing heart rate fluctuations throughout a day, examining their potential to gauge overall stress levels in a more comprehensive and real-world context. The data for this research was collected at a public university in the UK. Using signal processing, both the original heart rate sequences and their representations, obtained via Fourier transformation and wavelet analysis, have been modeled using advanced machine learning algorithms. 
Having achieved statistically significant results over the baseline, this work provides an understanding of how heart rate sequences alone may be used to characterize mental states through signal processing and machine learning, with the system poised for further testing as data collection continues.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"29"},"PeriodicalIF":0.0,"publicationDate":"2024-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11621279/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142787355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-11-21 | DOI: 10.1186/s40708-024-00241-y
Tianning Li, Yi Huang, Peng Wen, Yan Li
{"title":"Accurate depth of anesthesia monitoring based on EEG signal complexity and frequency features.","authors":"Tianning Li, Yi Huang, Peng Wen, Yan Li","doi":"10.1186/s40708-024-00241-y","DOIUrl":"10.1186/s40708-024-00241-y","url":null,"abstract":"<p><p>Accurate monitoring of the depth of anesthesia (DoA) is essential for ensuring patient safety and effective anesthesia management. Existing methods, such as the Bispectral Index (BIS), are limited in real-time accuracy and robustness; they generalize poorly across diverse patient datasets and are sensitive to artifacts, making it difficult to provide reliable DoA assessments in real time. This study proposes a novel method for DoA monitoring using EEG signals, focusing on accuracy, robustness, and real-time application. EEG signals were pre-processed using wavelet denoising and the discrete wavelet transform (DWT), and features such as Permutation Lempel-Ziv Complexity (PLZC) and Power Spectral Density (PSD) were extracted. A random forest regression model was employed to estimate anesthetic states, and an unsupervised learning method using the Hurst exponent algorithm and hierarchical clustering was introduced to detect transitions between anesthesia states. The method was tested on two independent datasets (UniSQ and VitalDB), achieving average Pearson correlation coefficients of 0.86 and 0.82, respectively. For the combined dataset, the model demonstrated an R-squared value of 0.70, an RMSE of 6.31, an MAE of 8.38, and a Pearson correlation of 0.84, showcasing its robustness and generalizability. 
This approach offers a more accurate and reliable real-time DoA monitoring tool that could significantly improve patient safety and anesthesia management, especially in diverse clinical environments.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"28"},"PeriodicalIF":0.0,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11582228/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142682781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-10-26 | DOI: 10.1186/s40708-024-00239-6
Mats Tveter, Thomas Tveitstøl, Christoffer Hatlestad-Hall, Ana S Pérez T, Erik Taubøll, Anis Yazidi, Hugo L Hammer, Ira R J Hebold Haraldsen
{"title":"Advancing EEG prediction with deep learning and uncertainty estimation.","authors":"Mats Tveter, Thomas Tveitstøl, Christoffer Hatlestad-Hall, Ana S Pérez T, Erik Taubøll, Anis Yazidi, Hugo L Hammer, Ira R J Hebold Haraldsen","doi":"10.1186/s40708-024-00239-6","DOIUrl":"10.1186/s40708-024-00239-6","url":null,"abstract":"<p><p>Deep Learning (DL) has the potential to enhance patient outcomes in healthcare by implementing proficient systems for disease detection and diagnosis. However, the complexity and lack of interpretability of DL models impede their widespread adoption for critical high-stakes predictions in healthcare. Incorporating uncertainty estimation in DL systems can increase trustworthiness, providing valuable insights into the model's confidence and improving the explanation of predictions. Additionally, introducing explainability measures, recognized and embraced by healthcare experts, can help address this challenge. In this study, we investigate DL models' ability to predict sex directly from electroencephalography (EEG) data. While sex prediction has limited direct clinical application, its binary nature makes it a valuable benchmark for optimizing deep learning techniques in EEG data analysis. Furthermore, we explore the use of DL ensembles to improve performance over single models and as an approach to increase interpretability and performance through uncertainty estimation. Lastly, we use a data-driven approach to evaluate the relationship between frequency bands and sex prediction, offering insights into their relative importance. InceptionNetwork, a single DL model, achieved 90.7% accuracy and an AUC of 0.947, and the best-performing ensemble, combining variations of InceptionNetwork and EEGNet, achieved 91.1% accuracy in predicting sex from EEG data using five-fold cross-validation. 
Uncertainty estimation through deep ensembles led to increased prediction performance, and the models were able to classify sex in all frequency bands, indicating sex-specific features across all bands.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"27"},"PeriodicalIF":0.0,"publicationDate":"2024-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11512943/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142509826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-10-22 | DOI: 10.1186/s40708-024-00240-z
Garrett Greiner, Yu Zhang
{"title":"Multi-modal EEG NEO-FFI with Trained Attention Layer (MENTAL) for mental disorder prediction.","authors":"Garrett Greiner, Yu Zhang","doi":"10.1186/s40708-024-00240-z","DOIUrl":"10.1186/s40708-024-00240-z","url":null,"abstract":"<p><p>Early detection and accurate diagnosis of mental disorders is difficult due to the complexity of the diagnostic process, resulting in conditions being left undiagnosed or misdiagnosed. Previous studies have demonstrated that features of Electroencephalogram (EEG) data, such as Power Spectral Density (PSD), are useful biomarkers for indicating the onset of various mental disorders. Existing models using EEG data are typically trained to distinguish between healthy and afflicted individuals, and they are unable to distinguish between individuals with different disorders. We propose MENTAL (Multi-modal EEG NEO-FFI with Trained Attention Layer) to predict an individual's mental state using both EEG and Neo-Five Factor Inventory (NEO-FFI) personality data. We include an attention layer that captures the interactions between personality traits and PSD features, and emphasizes the important PSD features. MENTAL features a Recurrent Neural Network (RNN) to model the temporal nature of EEG data. We train our model with the Two Decades Brainclinics Research Archive for Insights in Neuroscience (TDBRAIN) dataset, which consists of 1274 healthy and psychiatric individuals including over 30 different diagnoses. MENTAL is able to achieve 93.3% accuracy when trained to classify between healthy individuals and those with ADHD. When trained to identify individuals with ADHD from among 33 disorder classes, MENTAL improves accuracy from 70.5 to 81.7%. MENTAL also demonstrates over 20% improvement in predictive accuracy when trained for MDD prediction. For the multi-class classification task of three classes, MENTAL improves accuracy by 4%, and for five classes, by nearly 8%. 
MENTAL is the first multi-modal model that utilizes both EEG and NEO-FFI data for the task of mental disorder prediction. We are one of the first groups to utilize TDBRAIN for automated disorder classification. MENTAL is accessible and cost-effective, as EEG is more affordable than other neuroimaging methods such as MRI, and the NEO-FFI is a self-reported survey. Our model can lead to acceptance and support for individuals living with mental health challenges and improve quality of life in our society.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"26"},"PeriodicalIF":0.0,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11496460/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142476800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-10-03 | DOI: 10.1186/s40708-024-00238-7
Noushath Shaffi, Vimbi Viswan, Mufti Mahmud
{"title":"Ensemble of vision transformer architectures for efficient Alzheimer's Disease classification.","authors":"Noushath Shaffi, Vimbi Viswan, Mufti Mahmud","doi":"10.1186/s40708-024-00238-7","DOIUrl":"10.1186/s40708-024-00238-7","url":null,"abstract":"<p><p>Transformers have dominated the landscape of Natural Language Processing (NLP) and revolutionized generative AI applications. Vision Transformers (VTs) have recently become a new state-of-the-art for computer vision applications. Motivated by the success of VTs in capturing short- and long-range dependencies and their ability to handle class imbalance, this paper proposes an ensemble framework of VTs for the efficient classification of Alzheimer's Disease (AD). The framework consists of four vanilla VTs and ensembles formed using hard- and soft-voting approaches. The proposed model was tested using two popular AD datasets: OASIS and ADNI. The ADNI dataset was employed to assess the models' efficacy under imbalanced and data-scarce conditions. The ensemble of VTs saw an improvement of around 2% compared to individual models. Furthermore, the results are compared with state-of-the-art and custom-built Convolutional Neural Network (CNN) architectures and Machine Learning (ML) models under varying data conditions. The experimental results demonstrated overall accuracy gains of 4.14% and 4.72% over the ML and CNN algorithms, respectively. The study has also identified specific limitations and proposes avenues for future research. 
The codes used in the study are made publicly available.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"25"},"PeriodicalIF":0.0,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11450128/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142373055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-09-26 | DOI: 10.1186/s40708-024-00236-9
Changshan Li, Youqi Li, Hu Zhao, Liya Ding
{"title":"Enhancing brain image quality with 3D U-net for stripe removal in light sheet fluorescence microscopy.","authors":"Changshan Li, Youqi Li, Hu Zhao, Liya Ding","doi":"10.1186/s40708-024-00236-9","DOIUrl":"https://doi.org/10.1186/s40708-024-00236-9","url":null,"abstract":"<p><p>Light Sheet Fluorescence Microscopy (LSFM) is increasingly popular in neuroimaging for its ability to capture high-resolution 3D neural data. However, the presence of stripe noise significantly degrades image quality, particularly in complex 3D stripes with varying widths and brightness, posing challenges in neuroscience research. Existing stripe removal algorithms excel in suppressing noise and preserving details in 2D images with simple stripes but struggle with the complexity of 3D stripes. To address this, we propose a novel 3D U-net model for Stripe Removal in Light sheet fluorescence microscopy (USRL). This approach directly learns and removes stripes in 3D space across different scales, employing a dual-resolution strategy to effectively handle stripes of varying complexities. Additionally, we integrate a nonlinear mapping technique to normalize high dynamic range and unevenly distributed data before applying the stripe removal algorithm. We validate our method on diverse datasets, demonstrating substantial improvements in peak signal-to-noise ratio (PSNR) compared to existing algorithms. Moreover, our algorithm exhibits robust performance when applied to real LSFM data. Through extensive validation experiments, both on test sets and real-world data, our approach outperforms traditional methods, affirming its effectiveness in enhancing image quality. Furthermore, the adaptability of our algorithm extends beyond LSFM applications to encompass other imaging modalities. 
This versatility underscores its potential to enhance image usability across various research disciplines.</p>","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"11 1","pages":"24"},"PeriodicalIF":0.0,"publicationDate":"2024-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11427638/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brain Informatics | Pub Date: 2024-09-14 | DOI: 10.1186/s40708-024-00237-8
Hui Wei, Chenyue Feng, Fushun Li
{"title":"Modeling biological memory network by an autonomous and adaptive multi-agent system","authors":"Hui Wei, Chenyue Feng, Fushun Li","doi":"10.1186/s40708-024-00237-8","DOIUrl":"https://doi.org/10.1186/s40708-024-00237-8","url":null,"abstract":"At the intersection of computation and cognitive science, graph theory is utilized as a formalized description of complex relationships and structures, but traditional graph models are static, lack the dynamic and autonomous behaviors of biological neural networks, and rely on algorithms with a global view. This study introduces a multi-agent system (MAS) model based on graph theory, with each agent equipped with adaptive learning and decision-making capabilities, thereby facilitating decentralized dynamic information memory and the modeling and simulation of the brain’s memory process. This decentralized approach transforms memory storage into the management of MAS paths, with each agent utilizing localized information for the dynamic formation and modification of these paths; different paths correspond to different memory instances. The model’s unique memory algorithm avoids a global view, instead relying on neighborhood-based interactions to enhance resource utilization. Emulating neuron electrophysiology, each agent’s adaptive learning behavior is represented through a microcircuit centered around a variable resistor. Using principles of Ohm’s and Kirchhoff’s laws, we validated the model’s efficacy in memorizing and retrieving data through computer simulations. 
This approach offers a plausible neurobiological explanation for memory realization and validates the memory trace theory at a system level.","PeriodicalId":37465,"journal":{"name":"Brain Informatics","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142253479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}