{"title":"The Future of the Brain and Beyond","authors":"N. Howard","doi":"10.1109/iccicc46617.2019.9146078","DOIUrl":null,"url":null,"abstract":"In order to study brain function, some researchers have attempted to reverse-engineer neuronal networks and even the brain itself. This approach was based on the assumption that neurons in-vivo acted just like simple transistors in-silico. Unfortunately, both network and whole-brain modeling based on this premise have led to very little insight into actual brain function. The evidence for this claim is two-fold. First, the amount of energy needed to operate computing machinery that isn't anywhere near as complex as the human brain still requires much more energy than the latter. Second, because transistor-based computing reacts to static events whilst neurons can react to processes, properties inherent to computing architectures hardware prevent the true level of complexity and connectivity achieved in the human brain from being realized in-silico. In contrast to transistors, neurons can establish and change their connections and vary their signaling properties according to a variety of rules, allowing them to adapt to circumstances, self-assemble, auto-calibrate and store information by changing their properties according to experience (Laughlin & Sejnowski, 2003). In this speech, we elaborate on this evidence, and argue that there is a need to re-think the way we approach brain computation. In particular, we argue for a detailed understanding of neuronal function and network organization is required prior to neuronal network modeling attempt.","PeriodicalId":294902,"journal":{"name":"2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 18th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/iccicc46617.2019.9146078","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
In order to study brain function, some researchers have attempted to reverse-engineer neuronal networks and even the brain itself. This approach was based on the assumption that neurons in vivo behave just like simple transistors in silico. Unfortunately, both network and whole-brain modeling based on this premise have yielded very little insight into actual brain function. The evidence for this claim is two-fold. First, computing machinery that is nowhere near as complex as the human brain still requires far more energy to operate than the brain does. Second, because transistor-based computing reacts to static events whereas neurons can react to ongoing processes, properties inherent to computing hardware architectures prevent the level of complexity and connectivity achieved in the human brain from being realized in silico. In contrast to transistors, neurons can establish and change their connections and vary their signaling properties according to a variety of rules, allowing them to adapt to circumstances, self-assemble, auto-calibrate and store information by changing their properties with experience (Laughlin & Sejnowski, 2003). In this talk, we elaborate on this evidence and argue that we need to rethink the way we approach brain computation. In particular, we argue that a detailed understanding of neuronal function and network organization is required before any neuronal network modeling attempt.
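To make the contrast the abstract draws more concrete, the following is a minimal, purely illustrative Python sketch: a fixed, transistor-like gate whose input-output mapping never changes, next to a toy "plastic synapse" whose weight is adjusted by a simple Hebbian-style rule, so the same input produces a different response as experience accumulates. The names (fixed_gate, PlasticSynapse) and the specific update rule are assumptions introduced here for illustration only; they are not taken from the talk or from Laughlin & Sejnowski (2003).

```python
# Illustrative sketch only (assumed names and update rule, not the author's model).

def fixed_gate(x: float, threshold: float = 0.5) -> int:
    """Transistor-like element: the input-output mapping is fixed by its
    wiring and is identical on every presentation of the same input."""
    return 1 if x > threshold else 0


class PlasticSynapse:
    """Toy neuron-like element: its weight changes with experience, so the
    same input can yield a different response over time."""

    def __init__(self, weight: float = 0.2, learning_rate: float = 0.1):
        self.weight = weight
        self.learning_rate = learning_rate

    def respond(self, x: float) -> float:
        y = x * self.weight                         # graded response, not all-or-nothing
        self.weight += self.learning_rate * x * y   # Hebbian-style: co-activity strengthens the connection
        return y


if __name__ == "__main__":
    synapse = PlasticSynapse()
    for step in range(5):
        # The fixed gate answers identically every time; the plastic synapse's
        # response drifts as its weight is updated by its own history.
        print(step, fixed_gate(0.8), round(synapse.respond(0.8), 3))
```

Running the loop shows the gate returning 1 on every step while the synapse's output grows with repeated stimulation, a toy version of the history-dependent, self-calibrating behavior the abstract attributes to real neurons.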