{"title":"Three theorems: Brain-like networks logically reason and optimally generalize","authors":"J. Weng","doi":"10.1109/IJCNN.2011.6033613","DOIUrl":null,"url":null,"abstract":"Finite Automata (FA) is a base net for many sophisticated probability-based systems of artificial intelligence. However, an FA processes symbols, instead of images that the brain senses and produces (e.g., sensory images and motor images). Of course, many recurrent artificial neural networks process images. However, their non-calibrated internal states prevent generalization, let alone the feasibility of immediate and error-free learning. I wish to report a general-purpose Developmental Program (DP) for a new type of, brain-anatomy inspired, networks — Developmental Networks (DNs). The new theoretical results here are summarized by three theorems. (1) From any complex FA that demonstrates human knowledge through its sequence of the symbolic inputs-outputs, the DP incrementally develops a corresponding DN through the image codes of the symbolic inputs-outputs of the FA. The DN learning from the FA is incremental, immediate and error-free. (2) After learning the FA, if the DN freezes its learning but runs, it generalizes optimally for infinitely many image inputs and actions based on the embedded inner-product distance, state equivalence, and the principle of maximum likelihood. (3) After learning the FA, if the DN continues to learn and run, it “thinks” optimally in the sense of maximum likelihood based on its past experience.","PeriodicalId":415833,"journal":{"name":"The 2011 International Joint Conference on Neural Networks","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2011 International Joint Conference on Neural Networks","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2011.6033613","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 15
Abstract
A Finite Automaton (FA) is the base net for many sophisticated probability-based artificial-intelligence systems. However, an FA processes symbols rather than the images that the brain senses and produces (e.g., sensory images and motor images). Many recurrent artificial neural networks do process images, but their uncalibrated internal states prevent generalization, let alone immediate and error-free learning. I wish to report a general-purpose Developmental Program (DP) for a new type of brain-anatomy-inspired network, the Developmental Network (DN). The new theoretical results here are summarized in three theorems. (1) From any complex FA that demonstrates human knowledge through its sequence of symbolic inputs and outputs, the DP incrementally develops a corresponding DN from the image codes of those symbolic inputs and outputs. The DN's learning from the FA is incremental, immediate, and error-free. (2) After learning the FA, if the DN freezes its learning but continues to run, it generalizes optimally over infinitely many image inputs and actions, based on the embedded inner-product distance, state equivalence, and the principle of maximum likelihood. (3) After learning the FA, if the DN continues to learn and run, it "thinks" optimally in the maximum-likelihood sense, based on its past experience.
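The following Python sketch is not the paper's actual DP; it is a minimal toy illustration, under stated assumptions, of the flavor of theorems (1) and (2): an FA's symbolic inputs and outputs are replaced by vector "image codes" (here, hypothetical random unit vectors), each observed (state, input) pair recruits one hidden neuron in a single incremental and error-free step, and the frozen network then retrieves actions by top-1 inner-product matching, standing in for the inner-product distance and maximum-likelihood matching described in the abstract. The FA, the code dimension, and the one-neuron-per-pair recruitment rule are all illustrative assumptions, not details from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy FA (assumed for illustration): 2 states, 2 input symbols;
# delta maps (state, symbol) -> next state.
delta = {(0, 'a'): 1, (0, 'b'): 0, (1, 'a'): 1, (1, 'b'): 0}

def code(dim=16):
    # Hypothetical "image code": a random unit-norm vector standing
    # in for a sensory or motor image, instead of a discrete token.
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

sym_code = {s: code() for s in 'ab'}
state_code = {q: code() for q in (0, 1)}

Y_weights, Y_action = [], []  # recruited hidden neurons and their outputs

def learn(q, s, q_next):
    # One-shot, error-free recruitment: one neuron memorizes the
    # normalized concatenation of the (state, input) image codes.
    x = np.concatenate([state_code[q], sym_code[s]])
    Y_weights.append(x / np.linalg.norm(x))
    Y_action.append(q_next)

def run(q_vec, s_vec):
    # Frozen network: the neuron with the largest inner product with
    # the (normalized) input wins and emits its stored action.
    x = np.concatenate([q_vec, s_vec])
    x = x / np.linalg.norm(x)
    scores = [w @ x for w in Y_weights]
    return Y_action[int(np.argmax(scores))]

# Incremental learning: one pass over the FA's transitions suffices.
for (q, s), q_next in delta.items():
    learn(q, s, q_next)

# Error-free recall on every trained (state, input) pair.
assert all(run(state_code[q], sym_code[s]) == qn
           for (q, s), qn in delta.items())

# Generalization beyond trained inputs: a noisy version of input 'a'
# is still mapped correctly, because the nearest stored weight vector
# under inner-product distance wins the competition.
noisy_a = sym_code['a'] + 0.1 * rng.normal(size=16)
print(run(state_code[0], noisy_a))  # -> 1, matching delta[(0, 'a')]

The design point the sketch tries to convey is that nearest-neighbor retrieval under inner-product distance over unit vectors degrades gracefully: a perturbed input still falls closest to the weight vector learned from it, which is one intuitive reading of the abstract's claim that the frozen DN generalizes over infinitely many image inputs.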