Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy

IF 2.1 · CAS Tier 3 (Psychology) · JCR Q3 (Computer Science, Artificial Intelligence)
Lisa Miracchi Titus
{"title":"ChatGPT是否具有语义理解?发生策略的统计问题","authors":"Lisa Miracchi Titus","doi":"10.1016/j.cogsys.2023.101174","DOIUrl":null,"url":null,"abstract":"<div><p><span>Over the last decade, AI models of language and word meaning have been dominated by what we might call a </span><em>statistics-of-occurrence</em><span>, strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to what a human might produce (</span><em>meaning-semblant behavior</em><span>). Examples of what we can call Statistics-of-Occurrence Models (SOMs) include: Word2Vec (CBOW and Skip-Gram), BERT, GPT-3, and, most recently, ChatGPT. Increasingly, there have been suggestions that such systems have semantic understanding, or at least a proto-version of it. This paper argues against such claims. I argue that a necessary condition for a system to possess semantic understanding is that it function in ways that are causally explainable by appeal to its semantic properties. I then argue that SOMs do not plausibly satisfy this </span><em>Functioning Criterion</em>. Rather, the best explanation of their meaning-semblant behavior is what I call the <em>Statistical Hypothesis</em><span>: SOMs do not themselves function to represent or produce meaningful text; they just reflect the semantic information that exists in the aggregate given strong correlations between word placement and meaningful use. I consider and rebut three main responses to the claim that SOMs fail to meet the Functioning Criterion. The result, I hope, is increased clarity about </span><em>why</em> and <em>how</em> one should make claims about AI systems having semantic understanding.</p></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":null,"pages":null},"PeriodicalIF":2.1000,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy\",\"authors\":\"Lisa Miracchi Titus\",\"doi\":\"10.1016/j.cogsys.2023.101174\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p><span>Over the last decade, AI models of language and word meaning have been dominated by what we might call a </span><em>statistics-of-occurrence</em><span>, strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to what a human might produce (</span><em>meaning-semblant behavior</em><span>). Examples of what we can call Statistics-of-Occurrence Models (SOMs) include: Word2Vec (CBOW and Skip-Gram), BERT, GPT-3, and, most recently, ChatGPT. Increasingly, there have been suggestions that such systems have semantic understanding, or at least a proto-version of it. This paper argues against such claims. 
I argue that a necessary condition for a system to possess semantic understanding is that it function in ways that are causally explainable by appeal to its semantic properties. I then argue that SOMs do not plausibly satisfy this </span><em>Functioning Criterion</em>. Rather, the best explanation of their meaning-semblant behavior is what I call the <em>Statistical Hypothesis</em><span>: SOMs do not themselves function to represent or produce meaningful text; they just reflect the semantic information that exists in the aggregate given strong correlations between word placement and meaningful use. I consider and rebut three main responses to the claim that SOMs fail to meet the Functioning Criterion. The result, I hope, is increased clarity about </span><em>why</em> and <em>how</em> one should make claims about AI systems having semantic understanding.</p></div>\",\"PeriodicalId\":55242,\"journal\":{\"name\":\"Cognitive Systems Research\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":2.1000,\"publicationDate\":\"2023-09-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Cognitive Systems Research\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S1389041723001080\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041723001080","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to what a human might produce (meaning-semblant behavior). Examples of what we can call Statistics-of-Occurrence Models (SOMs) include: Word2Vec (CBOW and Skip-Gram), BERT, GPT-3, and, most recently, ChatGPT. Increasingly, there have been suggestions that such systems have semantic understanding, or at least a proto-version of it. This paper argues against such claims. I argue that a necessary condition for a system to possess semantic understanding is that it function in ways that are causally explainable by appeal to its semantic properties. I then argue that SOMs do not plausibly satisfy this Functioning Criterion. Rather, the best explanation of their meaning-semblant behavior is what I call the Statistical Hypothesis: SOMs do not themselves function to represent or produce meaningful text; they just reflect the semantic information that exists in the aggregate given strong correlations between word placement and meaningful use. I consider and rebut three main responses to the claim that SOMs fail to meet the Functioning Criterion. The result, I hope, is increased clarity about why and how one should make claims about AI systems having semantic understanding.
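To make the statistics-of-occurrence strategy concrete, below is a minimal, hypothetical sketch (not from the paper) of a purely count-based model: word vectors built from co-occurrence counts via PPMI and truncated SVD, a classic count-based relative of the Word2Vec models the abstract names. The toy corpus, window size, and dimensionality are all illustrative assumptions.

```python
# Illustrative sketch only: word vectors derived purely from
# co-occurrence statistics, with no access to word meanings.
# Corpus, window size, and dimensionality are arbitrary assumptions.
import numpy as np

corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played".split(),
]

vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a symmetric window of 2 words.
window = 2
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                counts[idx[w], idx[sent[j]]] += 1

# Positive pointwise mutual information (PPMI), then truncated SVD.
total = counts.sum()
pw = counts.sum(axis=1, keepdims=True) / total
pc = counts.sum(axis=0, keepdims=True) / total
pmi = np.log((counts / total + 1e-12) / (pw * pc + 1e-12))
ppmi = np.maximum(pmi, 0)

U, S, _ = np.linalg.svd(ppmi)
vectors = U[:, :4] * S[:4]  # 4-dimensional word vectors

def similarity(a, b):
    """Cosine similarity between two word vectors."""
    va, vb = vectors[idx[a]], vectors[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-12))

print(similarity("cat", "dog"))
```

On this toy corpus, "cat" and "dog" come out as similar solely because they occur in similar word contexts; this is the kind of meaning-semblant behavior that, on the paper's Statistical Hypothesis, needs no appeal to the system's semantic properties to explain.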

Source journal
Cognitive Systems Research (Engineering & Technology - Computer Science: Artificial Intelligence)
CiteScore: 9.40 · Self-citation rate: 5.10% · Publication volume: 40 · Review time: >12 weeks
Journal description: Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial. The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition. Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.