A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT

Reto Gubelmann
{"title":"A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT","authors":"Reto Gubelmann","doi":"10.1163/18756735-00000182","DOIUrl":null,"url":null,"abstract":"\nIn this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state of the art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question whether a given being fulfills these criteria. On the conceptual side, the article builds on Glock’s concept of intelligence, Taylor’s conception of intrinsic rightness as well as Wittgenstein’s rule-following considerations. On the empirical side, it is argued that current transformer-based NNLP models, such as BERT and GPT-3 come close to fulfilling these criteria.","PeriodicalId":43873,"journal":{"name":"Grazer Philosophische Studien-International Journal for Analytic Philosophy","volume":null,"pages":null},"PeriodicalIF":0.3000,"publicationDate":"2023-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Grazer Philosophische Studien-International Journal for Analytic Philosophy","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1163/18756735-00000182","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"PHILOSOPHY","Score":null,"Total":0}
引用次数: 1

Abstract

In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state-of-the-art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question of whether a given being fulfills these criteria. On the conceptual side, the article builds on Glock’s concept of intelligence, Taylor’s conception of intrinsic rightness, as well as Wittgenstein’s rule-following considerations. On the empirical side, it is argued that current transformer-based NNLP models, such as BERT and GPT-3, come close to fulfilling these criteria.