{"title":"How should we define Information Flow in Neural Circuits?","authors":"Praveen Venkatesh, Sanghamitra Dutta, P. Grover","doi":"10.1109/ISIT.2019.8849411","DOIUrl":null,"url":null,"abstract":"We develop a theoretical framework for defining information flow in neural circuits, within the context of \"eventrelated\" experimental paradigms in neuroscience. Here, a neural circuit is modeled as a directed graph, with \"clocked\" nodes that send transmissions to each other along the edges of the graph at discrete points in time. We are interested in a definition that captures the flow of \"stimulus\"-related information, and which guarantees a continuous information path between appropriately defined inputs and outputs in the directed graph. Prior measures, including those based on Granger Causality and Directed Information, fail to provide clear assumptions and guarantees about when they correctly reflect stimulus-related information flow, due to the absence of a theoretical foundation with a mathematical definition. We take a methodical approach— iterating through candidate definitions and counterexamples— to arrive at a definition for information flow that is based on conditional mutual information, and which satisfies desirable properties, including the existence of information paths.","PeriodicalId":6708,"journal":{"name":"2019 IEEE International Symposium on Information Theory (ISIT)","volume":"5 1","pages":"176-180"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Symposium on Information Theory (ISIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISIT.2019.8849411","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 7
Abstract
We develop a theoretical framework for defining information flow in neural circuits, within the context of "event-related" experimental paradigms in neuroscience. Here, a neural circuit is modeled as a directed graph, with "clocked" nodes that send transmissions to each other along the edges of the graph at discrete points in time. We are interested in a definition that captures the flow of "stimulus"-related information, and which guarantees a continuous information path between appropriately defined inputs and outputs in the directed graph. Prior measures, including those based on Granger Causality and Directed Information, fail to provide clear assumptions and guarantees about when they correctly reflect stimulus-related information flow, because they lack a theoretical foundation built on a mathematical definition. We take a methodical approach, iterating through candidate definitions and counterexamples, to arrive at a definition of information flow that is based on conditional mutual information and that satisfies desirable properties, including the existence of information paths.
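To make the role of conditional mutual information concrete, here is a minimal illustrative sketch of how one might test whether a single edge's transmission carries stimulus-related information, using a plug-in conditional mutual information estimate on discrete trial data. The variable names, the XOR-style toy circuit, and the choice of conditioning set are hypothetical and for illustration only; they are not the paper's exact definition or estimator.

```python
import numpy as np
from collections import Counter

def conditional_mutual_information(m, x, z):
    """Plug-in estimate of I(M; X | Z) in bits from discrete samples.

    m, x, z are equal-length 1-D arrays with one entry per experimental
    trial: the stimulus label M, the transmission X on the edge being
    tested, and the conditioning transmissions Z (encoded as one symbol).
    """
    n = len(m)
    p_mxz = Counter(zip(m, x, z))
    p_mz = Counter(zip(m, z))
    p_xz = Counter(zip(x, z))
    p_z = Counter(z)

    cmi = 0.0
    for (mi, xi, zi), c in p_mxz.items():
        p_joint = c / n
        # I(M;X|Z) = sum_{m,x,z} p(m,x,z) log[ p(m,x,z) p(z) / (p(m,z) p(x,z)) ]
        cmi += p_joint * np.log2(
            p_joint * (p_z[zi] / n)
            / ((p_mz[(mi, zi)] / n) * (p_xz[(xi, zi)] / n))
        )
    return cmi

# Hypothetical toy circuit: node A sends the stimulus XORed with another
# incoming transmission to node B. Unconditionally the edge A->B looks
# independent of the stimulus, but conditioned on the other transmission
# it carries a full bit -- the kind of case that motivates a definition
# based on conditional rather than unconditional mutual information.
rng = np.random.default_rng(0)
stimulus = rng.integers(0, 2, size=5000)   # binary stimulus M
other_in = rng.integers(0, 2, size=5000)   # another transmission into B
edge_tx = stimulus ^ other_in              # transmission on the edge A->B

print(conditional_mutual_information(stimulus, edge_tx, other_in))        # ~1 bit
print(conditional_mutual_information(stimulus, edge_tx,
                                     np.zeros_like(stimulus)))            # ~0 bits
```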