Authors: Abeba Birhane, Marek McGann
Journal: Language Sciences, Volume 106, Article 101672
DOI: 10.1016/j.langsci.2024.101672
Published: 2024-08-21 (Journal Article)
Open access PDF: https://www.sciencedirect.com/science/article/pii/S0388000124000615/pdfft?md5=f7f3281a359df35af751aca63248e4e7&pid=1-s2.0-S0388000124000615-main.pdf
Large models of what? Mistaking engineering achievements for human linguistic agency
In this paper we argue that key, often sensational and misleading, claims regarding the linguistic capabilities of Large Language Models (LLMs) rest on at least two unfounded assumptions: the assumption of language completeness and the assumption of data completeness. Language completeness assumes that a distinct and complete thing such as "a natural language" exists, the essential characteristics of which can be effectively and comprehensively modelled by an LLM. The assumption of data completeness relies on the belief that a language can be quantified and wholly captured by data. Work within the enactive approach to cognitive science makes clear that, rather than a distinct and complete thing, language is a means or way of acting. Languaging is not the kind of thing that admits of a complete or comprehensive modelling. From an enactive perspective we identify three key characteristics of enacted language: embodiment, participation, and precariousness. These characteristics are absent in LLMs and are likely incompatible in principle with current architectures. We argue that these absences imply that LLMs are not now, and cannot in their present form be, linguistic agents the way humans are. We illustrate the point in particular through the phenomenon of "algospeak", a recently described pattern of high-stakes human language activity in heavily moderated online environments. On the basis of these points, we conclude that sensational and misleading claims about LLM agency and capabilities emerge from a deep misconception of both what human language is and what LLMs are.
Journal introduction:
Language Sciences is a forum for debate, conducted so as to be of interest to the widest possible audience, on conceptual and theoretical issues in the various branches of general linguistics. The journal is also concerned with bringing to linguists' attention current thinking about language within disciplines other than linguistics itself; relevant contributions from anthropologists, philosophers, psychologists and sociologists, among others, will be warmly received. In addition, the Editor is particularly keen to encourage the submission of essays on topics in the history and philosophy of language studies, and review articles discussing the import of significant recent works on language and linguistics.