Connectionist models of mind: scales and the limits of machine imitation

P. Baryshnikov
{"title":"Connectionist models of mind: scales and the limits of machine imitation","authors":"P. Baryshnikov","doi":"10.17726/philit.2020.2.4","DOIUrl":null,"url":null,"abstract":"This paper is devoted to some generalizations of explanatory potential of connectionist approaches to theoretical problems of the philosophy of mind. Are considered both strong, and weaknesses of neural network models. Connectionism has close methodological ties with modern neurosciences and neurophilosophy. And this fact strengthens its positions, in terms of empirical naturalistic approaches. However, at the same time this direction inherits weaknesses of computational approach, and in this case all system of anticomputational critical arguments becomes applicable to the connectionst models of mind. The last developments in the field of deep learning gave rich empirical material for cognitive sciences. Multilayered networks, mathematical models of associative dynamics of learning, self-organizing neuronets and all that allow to explain the principles of human conceptual organizing and after this to emulate these processes in computer systems. At all engineering achievements of this technology there is a traditional criticism from representatives of cognitive psychology who cannot accept a thesis about learning ability of a neuronet on the basis of redistribution of scales. Process of learning of natural intelligence, according to cognitive models, happens due to attraction of knowledge broadcast in a symbolical form (mental representations, concepts) at the expense of the systems of output knowledge expressed in the propositional contents. Some philosophical aspects of «neural metaphor» in modern cognitive sciences create the problem field which demands comprehensive understanding, the first step towards which is taken in this work.","PeriodicalId":421399,"journal":{"name":"Philosophical Problems of Information Technologies and Cyberspace","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Philosophical Problems of Information Technologies and Cyberspace","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.17726/philit.2020.2.4","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

This paper is devoted to some generalizations about the explanatory potential of connectionist approaches to theoretical problems in the philosophy of mind. Both the strengths and the weaknesses of neural network models are considered. Connectionism has close methodological ties with modern neuroscience and neurophilosophy, and this fact strengthens its position among empirical, naturalistic approaches. At the same time, however, this direction inherits the weaknesses of the computational approach, so the entire system of anti-computational critical arguments becomes applicable to connectionist models of mind. The latest developments in deep learning have provided rich empirical material for the cognitive sciences. Multilayer networks, mathematical models of the associative dynamics of learning, and self-organizing neural networks make it possible to explain the principles of human conceptual organization and then to emulate these processes in computer systems. For all the engineering achievements of this technology, there remains a traditional criticism from representatives of cognitive psychology who cannot accept the thesis that a neural network learns merely through the redistribution of weights. According to cognitive models, learning in natural intelligence proceeds by drawing on knowledge transmitted in symbolic form (mental representations, concepts) through systems of knowledge inference expressed in propositional content. Some philosophical aspects of the «neural metaphor» in modern cognitive science create a problem field that demands comprehensive understanding; this work takes a first step towards it.
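The thesis at issue, that everything a network "knows" consists in the configuration of its connection weights, can be made concrete with a minimal sketch (illustrative only, not taken from the paper): a single linear neuron trained by the delta rule, in which each step of learning is literally a small redistribution of two numbers.

```python
# Illustrative sketch (not from the paper): "learning" in a connectionist
# model amounts to the gradual redistribution of numeric weights.
# A single linear neuron is trained to approximate y = 2*x1 + 1*x2 by
# repeated small weight adjustments driven by prediction error.

def train(samples, lr=0.1, epochs=50):
    w = [0.0, 0.0]                            # connection weights; all "knowledge" lives here
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = w[0] * x1 + w[1] * x2      # forward pass
            error = target - pred             # prediction error
            w[0] += lr * error * x1           # redistribute weight 0
            w[1] += lr * error * x2           # redistribute weight 1
    return w

data = [((1.0, 0.0), 2.0), ((0.0, 1.0), 1.0), ((1.0, 1.0), 3.0)]
print(train(data))                            # weights converge toward [2.0, 1.0]
```

Nothing symbolic or propositional appears anywhere in this process, which is precisely why cognitive psychologists in the symbolic tradition dispute that such weight redistribution deserves to be called learning.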