Does Machine Understanding Require Consciousness?

IF 3.1 · Zone 4 (Medicine) · Q2 (Neurosciences)
R. Pepperell
{"title":"机器理解需要意识吗?","authors":"R. Pepperell","doi":"10.3389/fnsys.2022.788486","DOIUrl":null,"url":null,"abstract":"This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, in part at least, explains for the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines—machine understanding—may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.","PeriodicalId":12649,"journal":{"name":"Frontiers in Systems Neuroscience","volume":" ","pages":""},"PeriodicalIF":3.1000,"publicationDate":"2022-05-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Does Machine Understanding Require Consciousness?\",\"authors\":\"R. Pepperell\",\"doi\":\"10.3389/fnsys.2022.788486\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, in part at least, explains for the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. 
I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines—machine understanding—may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.\",\"PeriodicalId\":12649,\"journal\":{\"name\":\"Frontiers in Systems Neuroscience\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":3.1000,\"publicationDate\":\"2022-05-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Frontiers in Systems Neuroscience\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.3389/fnsys.2022.788486\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"NEUROSCIENCES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Systems Neuroscience","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.3389/fnsys.2022.788486","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"NEUROSCIENCES","Score":null,"Total":0}
Citations: 2

Abstract

This article addresses the question of whether machine understanding requires consciousness. Some researchers in the field of machine understanding have argued that it is not necessary for computers to be conscious as long as they can match or exceed human performance in certain tasks. But despite the remarkable recent success of machine learning systems in areas such as natural language processing and image classification, important questions remain about their limited performance and about whether their cognitive abilities entail genuine understanding or are the product of spurious correlations. Here I draw a distinction between natural, artificial, and machine understanding. I analyse some concrete examples of natural understanding and show that although it shares properties with the artificial understanding implemented in current machine learning systems, it also has some essential differences, the main one being that natural understanding in humans entails consciousness. Moreover, evidence from psychology and neurobiology suggests that it is this capacity for consciousness that, in part at least, explains the superior performance of humans in some cognitive tasks and may also account for the authenticity of semantic processing that seems to be the hallmark of natural understanding. I propose a hypothesis that might help to explain why consciousness is important to understanding. In closing, I suggest that progress toward implementing human-like understanding in machines—machine understanding—may benefit from a naturalistic approach in which natural processes are modelled as closely as possible in mechanical substrates.
Source journal
Frontiers in Systems Neuroscience (Neuroscience - Developmental Neuroscience)
CiteScore: 6.00
Self-citation rate: 3.30%
Articles published: 144
Review time: 14 weeks
Journal description: Frontiers in Systems Neuroscience publishes rigorously peer-reviewed research that advances our understanding of whole systems of the brain, including those involved in sensation, movement, learning and memory, attention, reward, decision-making, reasoning, executive functions, and emotions.