Three theorems: Brain-like networks logically reason and optimally generalize

J. Weng
{"title":"Three theorems: Brain-like networks logically reason and optimally generalize","authors":"J. Weng","doi":"10.1109/IJCNN.2011.6033613","DOIUrl":null,"url":null,"abstract":"Finite Automata (FA) is a base net for many sophisticated probability-based systems of artificial intelligence. However, an FA processes symbols, instead of images that the brain senses and produces (e.g., sensory images and motor images). Of course, many recurrent artificial neural networks process images. However, their non-calibrated internal states prevent generalization, let alone the feasibility of immediate and error-free learning. I wish to report a general-purpose Developmental Program (DP) for a new type of, brain-anatomy inspired, networks — Developmental Networks (DNs). The new theoretical results here are summarized by three theorems. (1) From any complex FA that demonstrates human knowledge through its sequence of the symbolic inputs-outputs, the DP incrementally develops a corresponding DN through the image codes of the symbolic inputs-outputs of the FA. The DN learning from the FA is incremental, immediate and error-free. (2) After learning the FA, if the DN freezes its learning but runs, it generalizes optimally for infinitely many image inputs and actions based on the embedded inner-product distance, state equivalence, and the principle of maximum likelihood. (3) After learning the FA, if the DN continues to learn and run, it “thinks” optimally in the sense of maximum likelihood based on its past experience.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2011 International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2011.6033613","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 15

Abstract

A Finite Automaton (FA) is a base net for many sophisticated probability-based systems of artificial intelligence. However, an FA processes symbols instead of the images that the brain senses and produces (e.g., sensory images and motor images). Of course, many recurrent artificial neural networks process images, but their uncalibrated internal states prevent generalization, let alone immediate and error-free learning. I wish to report a general-purpose Developmental Program (DP) for a new type of brain-anatomy-inspired network, the Developmental Network (DN). The new theoretical results here are summarized by three theorems. (1) From any complex FA that demonstrates human knowledge through its sequence of symbolic inputs and outputs, the DP incrementally develops a corresponding DN through image codes of the FA's symbolic inputs and outputs. The DN's learning from the FA is incremental, immediate, and error-free. (2) After learning the FA, if the DN freezes its learning but continues to run, it generalizes optimally over infinitely many image inputs and actions, based on the embedded inner-product distance, state equivalence, and the principle of maximum likelihood. (3) After learning the FA, if the DN continues to learn and run, it "thinks" optimally in the sense of maximum likelihood based on its past experience.
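The following is a minimal sketch, in Python, of the idea behind theorems (1) and (2); it is not the paper's DP/DN implementation, and the class SimpleDN, its methods, and all codes below are illustrative assumptions. Each FA transition is memorized in one step as a normalized (state, input) vector pattern together with its next-state code, and the frozen network generalizes to perturbed image codes by emitting the code attached to the stored pattern with the largest inner-product response.

# Minimal sketch of a DN-like memory for FA transitions (illustrative only).
import numpy as np

class SimpleDN:
    """Toy network with input area X, hidden area Y, and motor area Z.

    Each Y neuron memorizes one observed (state, input) pattern as its
    weight vector (one-shot, Hebbian-style learning) and responds by
    inner product with the current pattern; Z outputs the next-state
    code attached to the winning Y neuron.
    """

    def __init__(self):
        self.y_weights = []   # unit-length (state, input) patterns
        self.z_codes = []     # next-state/action code per Y neuron

    @staticmethod
    def _unit(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v

    def learn(self, state_code, input_code, next_code):
        # Store one FA transition: incremental, immediate, error-free.
        pattern = self._unit(np.concatenate([state_code, input_code]))
        self.y_weights.append(pattern)
        self.z_codes.append(np.asarray(next_code, dtype=float))

    def run(self, state_code, input_code):
        # Frozen-learning mode: the Y neuron with the largest
        # inner-product response wins and drives the Z output.
        pattern = self._unit(np.concatenate([state_code, input_code]))
        responses = [w @ pattern for w in self.y_weights]
        return self.z_codes[int(np.argmax(responses))]

A short usage example with a two-state FA over the symbols {a, b}, where 'a' toggles the state and 'b' keeps it:

# State and input "image" codes (hypothetical one-hot vectors).
s0, s1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
a, b = np.array([1.0, 0.0]), np.array([0.0, 1.0])

dn = SimpleDN()
for s, x, s_next in [(s0, a, s1), (s1, a, s0), (s0, b, s0), (s1, b, s1)]:
    dn.learn(s, x, s_next)

# A noisy version of input 'a' in state s0 still maps to s1, because the
# inner-product match selects the nearest stored (state, input) pattern.
noisy_a = np.array([0.9, 0.2])
print(dn.run(s0, noisy_a))   # -> [0. 1.], the code for s1

The actual DN described in the paper involves richer mechanisms (e.g., competition among hidden neurons and continued adaptation when learning is not frozen), which this sketch deliberately omits.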