{"title":"Can multivariate Granger causality detect directed connectivity of a multistable and dynamic biological decision network model?","authors":"Abdoreza Asadpour, KongFatt Wong-Lin","doi":"arxiv-2408.01528","DOIUrl":null,"url":null,"abstract":"Extracting causal connections can advance interpretable AI and machine\nlearning. Granger causality (GC) is a robust statistical method for estimating\ndirected influences (DC) between signals. While GC has been widely applied to\nanalysing neuronal signals in biological neural networks and other domains, its\napplication to complex, nonlinear, and multistable neural networks is less\nexplored. In this study, we applied time-domain multi-variate Granger causality\n(MVGC) to the time series neural activity of all nodes in a trained multistable\nbiologically based decision neural network model with real-time decision\nuncertainty monitoring. Our analysis demonstrated that challenging two-choice\ndecisions, where input signals could be closely matched, and the appropriate\napplication of fine-grained sliding time windows, could readily reveal the\noriginal model's DC. Furthermore, the identified DC varied based on whether the\nnetwork had correct or error decisions. Integrating the identified DC from\ndifferent decision outcomes recovered most of the original model's\narchitecture, despite some spurious and missing connectivity. This approach\ncould be used as an initial exploration to enhance the interpretability and\ntransparency of dynamic multistable and nonlinear biological or AI systems by\nrevealing causal connections throughout different phases of neural network\ndynamics and outcomes.","PeriodicalId":501517,"journal":{"name":"arXiv - QuanBio - Neurons and Cognition","volume":"51 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - QuanBio - Neurons and Cognition","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01528","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Extracting causal connections can advance interpretable AI and machine learning. Granger causality (GC) is a robust statistical method for estimating directed connectivity (DC) between signals. While GC has been widely applied to analysing neuronal signals in biological neural networks and other domains, its application to complex, nonlinear, and multistable neural networks is less explored. In this study, we applied time-domain multivariate Granger causality (MVGC) to the time-series neural activity of all nodes in a trained multistable, biologically based decision neural network model with real-time decision-uncertainty monitoring. Our analysis demonstrated that challenging two-choice decisions, in which the input signals could be closely matched, combined with the appropriate application of fine-grained sliding time windows, could readily reveal the original model's DC. Furthermore, the identified DC varied depending on whether the network made correct or error decisions. Integrating the identified DC from the different decision outcomes recovered most of the original model's architecture, despite some spurious and missing connections. This approach could serve as an initial exploration to enhance the interpretability and transparency of dynamic, multistable, and nonlinear biological or AI systems by revealing causal connections throughout the different phases of neural network dynamics and outcomes.
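
To illustrate the kind of analysis the abstract describes, below is a minimal Python sketch (not the authors' code) of time-domain pairwise-conditional MVGC computed over sliding windows, using statsmodels' VAR estimator. The window and step sizes, model order, channel count, and the injected influence in the surrogate data are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch of sliding-window multivariate Granger causality (MVGC).
# GC from source j to target i is the log-ratio of the target's residual
# variance in a reduced VAR (channel j omitted) vs. the full VAR.
import numpy as np
from statsmodels.tsa.api import VAR

def mvgc_window(data, order=5):
    """Pairwise-conditional GC for one window.

    data : (n_samples, n_channels) array of node activities.
    Returns an (n_channels, n_channels) matrix; gc[i, j] is GC from j to i.
    """
    n_ch = data.shape[1]
    full_res = VAR(data).fit(order).resid      # residuals of the full model
    gc = np.zeros((n_ch, n_ch))
    for j in range(n_ch):                      # candidate source channel
        # Reduced model needs >= 2 remaining channels for VAR to fit.
        keep = [k for k in range(n_ch) if k != j]
        red_res = VAR(data[:, keep]).fit(order).resid
        for row, i in enumerate(keep):         # every target except the source
            gc[i, j] = np.log(np.var(red_res[:, row]) / np.var(full_res[:, i]))
    return gc

def sliding_mvgc(data, win=400, step=100, order=5):
    """GC matrices over fine-grained sliding windows (win/step in samples)."""
    return [mvgc_window(data[t:t + win], order)
            for t in range(0, data.shape[0] - win + 1, step)]

# Example on surrogate data standing in for the network's node time series.
rng = np.random.default_rng(0)
sim = rng.standard_normal((2000, 4))           # hypothetical 4-node recording
sim[1:, 1] += 0.5 * sim[:-1, 0]                # inject a lag-1 influence 0 -> 1
gc_over_time = sliding_mvgc(sim)
print(gc_over_time[0][1, 0])                   # should typically dominate
```

In a setting like the paper's, the per-window GC matrices would be thresholded for significance and then combined across windows and across correct/error trials to reconstruct the network's directed connectivity.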