{"title":"An Algorithmic Theory for Conscious Learning","authors":"J. Weng","doi":"10.1145/3512826.3512827","DOIUrl":null,"url":null,"abstract":"The new conscious learning mode here is end-to-end (3D-to-2D-to-3D) and free from annotations of 2D images and 2D motor images, such as a bounding box for a patch to be attended to. The algorithm directly takes that of the Developmental Networks that has been previously published extensively with rich experimental results. This paper fills the huge gap between 3D world, to 2D sensory images and 2D motor images, back to 3D world so the conscious learning is end-to-end without a need for motor-impositions. This new conscious learning methodology is a major departure from traditional AI—handcrafting symbolic labels that tend to be brittle (e.g., for driverless cars) and then “spoon-feeding” pre-collected “big data”. The analysis here establishes that autonomous imitations as presented are a general mechanism in learning universal Turing machines. Autonomous imitations drastically reduce the teaching complexity compared to pre-collected “big data”, especially because no annotations of training data are needed. This learning mode is technically supported by a new kind of neural networks called Developmental Network-2 (DN-2) as an algorithmic basis, due to its incremental, non-iterative, on-the-fly learning mode along with the optimality (in the sense of maximum likelihood) in learning emergent super Turing machines from the open-ended real physical world. This work is directly related to electronics engineering because it requires large-scale on-the-fly brainoid chips in conscious learning robots.","PeriodicalId":270295,"journal":{"name":"Proceedings of the 2022 3rd International Conference on Artificial Intelligence in Electronics Engineering","volume":"220 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2022 3rd International Conference on Artificial Intelligence in Electronics Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3512826.3512827","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
The new conscious learning mode presented here is end-to-end (3D-to-2D-to-3D) and free from annotations of 2D sensory images and 2D motor images, such as a bounding box marking a patch to be attended to. The algorithm is directly that of the Developmental Networks, which have been published extensively with rich experimental results. This paper fills the large gap from the 3D world, to 2D sensory images and 2D motor images, and back to the 3D world, so that conscious learning is end-to-end without a need for motor impositions. This new conscious learning methodology is a major departure from traditional AI, which handcrafts symbolic labels that tend to be brittle (e.g., for driverless cars) and then "spoon-feeds" pre-collected "big data". The analysis here establishes that autonomous imitations, as presented, are a general mechanism for learning universal Turing machines. Autonomous imitations drastically reduce teaching complexity compared to pre-collected "big data", especially because no annotations of training data are needed. This learning mode is technically supported by a new kind of neural network, Developmental Network-2 (DN-2), as its algorithmic basis, owing to DN-2's incremental, non-iterative, on-the-fly learning mode and its optimality (in the sense of maximum likelihood) in learning emergent super Turing machines from the open-ended real physical world. This work is directly related to electronics engineering because it requires large-scale on-the-fly brainoid chips in conscious learning robots.
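To make the abstract's claim about incremental, non-iterative, on-the-fly learning concrete, the sketch below shows one way such an update can be written: each neuron keeps a firing-age counter and refines its weight vector as a running average of the inputs it wins, so no training set is stored and no iterative optimization pass is run. This is a minimal illustrative sketch, not the published DN-2 algorithm; the class name, the top-k competition, and the 1/age step size are assumptions introduced here for exposition.

```python
import numpy as np

class IncrementalLayer:
    """Illustrative sketch (not the published DN-2 code): neurons learn
    incrementally and non-iteratively from a stream of inputs."""

    def __init__(self, num_neurons, input_dim, top_k=1, rng=None):
        rng = rng or np.random.default_rng(0)
        self.w = rng.normal(size=(num_neurons, input_dim))     # synaptic weight vectors
        self.w /= np.linalg.norm(self.w, axis=1, keepdims=True)
        self.age = np.zeros(num_neurons)                        # per-neuron firing age
        self.top_k = top_k

    def step(self, x):
        """One input frame in, one sparse response out; only winners update."""
        x = x / (np.linalg.norm(x) + 1e-12)
        response = self.w @ x                                   # inner-product match
        winners = np.argsort(response)[-self.top_k:]            # top-k competition
        for j in winners:
            self.age[j] += 1.0
            lr = 1.0 / self.age[j]                              # running-average step size
            self.w[j] = (1.0 - lr) * self.w[j] + lr * x         # incremental, non-iterative update
            self.w[j] /= np.linalg.norm(self.w[j]) + 1e-12      # keep weights unit length
        y = np.zeros_like(response)
        y[winners] = response[winners]
        return y

# Usage: feed frames one at a time, as they arrive.
layer = IncrementalLayer(num_neurons=50, input_dim=256, top_k=3)
for _ in range(100):
    frame = np.random.default_rng().normal(size=256)            # stand-in for a sensory frame
    _ = layer.step(frame)
```

Without the final renormalization, the 1/age step size makes each winning weight vector the running mean of the inputs it has won, which is the maximum-likelihood estimate of that mean; the sketch is intended only to illustrate that flavor of on-the-fly learning, not the network's full sensory-motor architecture.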