{"title":"用ANIMA学习","authors":"R. Lutskanov","doi":"10.5840/bjp202113221","DOIUrl":null,"url":null,"abstract":"The paper develops a semi-formal model of learning which modifies the traditional paradigm of artificial neural networks, implementing deep learning by means of a key insight borrowed from the works of Marvin Minsky: the so-called Principle of Non-Compromise. The principle provides a learning mechanism which states that conflicts in the processing of data to be integrated are a mark of unreliability or irrelevance; hence, lower-level conflicts should lead to higher-level weight-adjustments. This internal mechanism augments the external mechanism of weight adjustment by back-propagation, which is typical for the standard models of machine learning. The text is structured as follows: (§1) opens the discussion by providing an informal overview of real-world decision-making and learning; (§2) sketches a typology of decision architectures: the individualistic approach of classical decision theory, the general aggregation mechanism of social choice theory, the local aggregation mechanism of agent-based modeling, and the intermediate hierarchical model of Marvin Minsky's “Society of Mind”; (§3) sketches the general outline of ANIMA – a new model of decision-making and learning that borrows insights from Minsky's informal exposition; (§4) is the bulk of the paper; it provides a discussion of a toy exemplification of ANIMA which lets us see the Principle of Non-Compromise at work; (§5) lists some possible scenarios for the evolution of a model of this kind; (§6) is the closing section; it discusses some important differences between the way ANIMA was construed here and the typical formal rendering of learning by means of artificial neural networks and deep learning.","PeriodicalId":41126,"journal":{"name":"Balkan Journal of Philosophy","volume":"1 1","pages":""},"PeriodicalIF":0.1000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Learning with ANIMA\",\"authors\":\"R. Lutskanov\",\"doi\":\"10.5840/bjp202113221\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The paper develops a semi-formal model of learning which modifies the traditional paradigm of artificial neural networks, implementing deep learning by means of a key insight borrowed from the works of Marvin Minsky: the so-called Principle of Non-Compromise. The principle provides a learning mechanism which states that conflicts in the processing of data to be integrated are a mark of unreliability or irrelevance; hence, lower-level conflicts should lead to higher-level weight-adjustments. This internal mechanism augments the external mechanism of weight adjustment by back-propagation, which is typical for the standard models of machine learning. 
The text is structured as follows: (§1) opens the discussion by providing an informal overview of real-world decision-making and learning; (§2) sketches a typology of decision architectures: the individualistic approach of classical decision theory, the general aggregation mechanism of social choice theory, the local aggregation mechanism of agent-based modeling, and the intermediate hierarchical model of Marvin Minsky's “Society of Mind”; (§3) sketches the general outline of ANIMA – a new model of decision-making and learning that borrows insights from Minsky's informal exposition; (§4) is the bulk of the paper; it provides a discussion of a toy exemplification of ANIMA which lets us see the Principle of Non-Compromise at work; (§5) lists some possible scenarios for the evolution of a model of this kind; (§6) is the closing section; it discusses some important differences between the way ANIMA was construed here and the typical formal rendering of learning by means of artificial neural networks and deep learning.\",\"PeriodicalId\":41126,\"journal\":{\"name\":\"Balkan Journal of Philosophy\",\"volume\":\"1 1\",\"pages\":\"\"},\"PeriodicalIF\":0.1000,\"publicationDate\":\"2021-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Balkan Journal of Philosophy\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5840/bjp202113221\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"PHILOSOPHY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Balkan Journal of Philosophy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5840/bjp202113221","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PHILOSOPHY","Score":null,"Total":0}
Citations: 0
Abstract
The paper develops a semi-formal model of learning which modifies the traditional paradigm of artificial neural networks, implementing deep learning by means of a key insight borrowed from the works of Marvin Minsky: the so-called Principle of Non-Compromise. The principle provides a learning mechanism which states that conflicts in the processing of data to be integrated are a mark of unreliability or irrelevance; hence, lower-level conflicts should lead to higher-level weight-adjustments. This internal mechanism augments the external mechanism of weight adjustment by back-propagation, which is typical for the standard models of machine learning. The text is structured as follows: (§1) opens the discussion by providing an informal overview of real-world decision-making and learning; (§2) sketches a typology of decision architectures: the individualistic approach of classical decision theory, the general aggregation mechanism of social choice theory, the local aggregation mechanism of agent-based modeling, and the intermediate hierarchical model of Marvin Minsky's “Society of Mind”; (§3) sketches the general outline of ANIMA – a new model of decision-making and learning that borrows insights from Minsky's informal exposition; (§4) is the bulk of the paper; it provides a discussion of a toy exemplification of ANIMA which lets us see the Principle of Non-Compromise at work; (§5) lists some possible scenarios for the evolution of a model of this kind; (§6) is the closing section; it discusses some important differences between the way ANIMA was construed here and the typical formal rendering of learning by means of artificial neural networks and deep learning.
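The abstract describes the Principle of Non-Compromise only informally: disagreement among lower-level units is treated as a sign of unreliability, and it triggers weight adjustments at the level above, alongside ordinary back-propagation. The following minimal sketch is not the paper's ANIMA formalism; all names (Agency, non_compromise_update, the exponential penalty) are illustrative assumptions, offered only to make the conflict-driven, internal weight adjustment concrete.

```python
# A minimal sketch (not the paper's actual ANIMA model) of a
# Non-Compromise-style update: when lower-level units disagree,
# the higher level reduces the trust it places in the dissenters.
import numpy as np

class Agency:
    """A hypothetical higher-level unit aggregating several lower-level outputs."""

    def __init__(self, n_children: int, lr: float = 0.1):
        self.weights = np.full(n_children, 1.0 / n_children)  # equal initial trust
        self.lr = lr

    def integrate(self, child_outputs: np.ndarray) -> float:
        """Weighted aggregation of the children's signals."""
        return float(self.weights @ child_outputs)

    def non_compromise_update(self, child_outputs: np.ndarray) -> None:
        """Internal mechanism: treat disagreement with the group consensus
        as unreliability and shrink the conflicting children's weights."""
        consensus = np.median(child_outputs)
        conflict = np.abs(child_outputs - consensus)   # per-child conflict
        self.weights *= np.exp(-self.lr * conflict)    # penalize dissenters
        self.weights /= self.weights.sum()             # re-normalize trust

# Usage: three lower-level units report; the dissenting one loses influence.
agency = Agency(n_children=3)
outputs = np.array([0.9, 1.0, -0.8])   # third unit conflicts with the others
agency.non_compromise_update(outputs)
print(agency.weights)                  # the conflicting unit's weight drops
print(agency.integrate(outputs))
```

In this sketch the update is purely internal (driven by conflict among the children themselves), which is how it would complement, rather than replace, an external error signal such as back-propagation.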
Journal Description:
The Balkan Journal of Philosophy is a peer-reviewed international periodical, academic in spirit, that publishes high-quality papers on current problems and discussions in philosophy. While open to all fields and interests, the journal devotes special attention to the treatment of philosophical problems in the Balkans and south-eastern Europe, and to their influence on the development of philosophy in this region. All papers are published in English. BJP is published under the auspices of the Bulgarian Academy of Sciences.