{"title":"Does ChatGPT have semantic understanding? A problem with the statistics-of-occurrence strategy","authors":"Lisa Miracchi Titus","doi":"10.1016/j.cogsys.2023.101174","DOIUrl":null,"url":null,"abstract":"<div><p><span>Over the last decade, AI models of language and word meaning have been dominated by what we might call a </span><em>statistics-of-occurrence</em><span>, strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to what a human might produce (</span><em>meaning-semblant behavior</em><span>). Examples of what we can call Statistics-of-Occurrence Models (SOMs) include: Word2Vec (CBOW and Skip-Gram), BERT, GPT-3, and, most recently, ChatGPT. Increasingly, there have been suggestions that such systems have semantic understanding, or at least a proto-version of it. This paper argues against such claims. I argue that a necessary condition for a system to possess semantic understanding is that it function in ways that are causally explainable by appeal to its semantic properties. I then argue that SOMs do not plausibly satisfy this </span><em>Functioning Criterion</em>. Rather, the best explanation of their meaning-semblant behavior is what I call the <em>Statistical Hypothesis</em><span>: SOMs do not themselves function to represent or produce meaningful text; they just reflect the semantic information that exists in the aggregate given strong correlations between word placement and meaningful use. I consider and rebut three main responses to the claim that SOMs fail to meet the Functioning Criterion. The result, I hope, is increased clarity about </span><em>why</em> and <em>how</em> one should make claims about AI systems having semantic understanding.</p></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"83 ","pages":"Article 101174"},"PeriodicalIF":2.1000,"publicationDate":"2023-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Systems Research","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1389041723001080","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to what a human might produce (meaning-semblant behavior). Examples of what we can call Statistics-of-Occurrence Models (SOMs) include: Word2Vec (CBOW and Skip-Gram), BERT, GPT-3, and, most recently, ChatGPT. Increasingly, there have been suggestions that such systems have semantic understanding, or at least a proto-version of it. This paper argues against such claims. I argue that a necessary condition for a system to possess semantic understanding is that it function in ways that are causally explainable by appeal to its semantic properties. I then argue that SOMs do not plausibly satisfy this Functioning Criterion. Rather, the best explanation of their meaning-semblant behavior is what I call the Statistical Hypothesis: SOMs do not themselves function to represent or produce meaningful text; they merely reflect the semantic information that exists in the aggregate, given strong correlations between word placement and meaningful use. I consider and rebut three main responses to the claim that SOMs fail to meet the Functioning Criterion. The result, I hope, is increased clarity about why and how one should make claims about AI systems having semantic understanding.
Journal introduction:
Cognitive Systems Research is dedicated to the study of human-level cognition. As such, it welcomes papers which advance the understanding, design and applications of cognitive and intelligent systems, both natural and artificial.
The journal brings together a broad community studying cognition in its many facets in vivo and in silico, across the developmental spectrum, focusing on individual capacities or on entire architectures. It aims to foster debate and integrate ideas, concepts, constructs, theories, models and techniques from across different disciplines and different perspectives on human-level cognition. The scope of interest includes the study of cognitive capacities and architectures - both brain-inspired and non-brain-inspired - and the application of cognitive systems to real-world problems as far as it offers insights relevant for the understanding of cognition.
Cognitive Systems Research therefore welcomes mature and cutting-edge research approaching cognition from a systems-oriented perspective, both theoretical and empirically-informed, in the form of original manuscripts, short communications, opinion articles, systematic reviews, and topical survey articles from the fields of Cognitive Science (including Philosophy of Cognitive Science), Artificial Intelligence/Computer Science, Cognitive Robotics, Developmental Science, Psychology, and Neuroscience and Neuromorphic Engineering. Empirical studies will be considered if they are supplemented by theoretical analyses and contributions to theory development and/or computational modelling studies.