On bullshit, large language models, and the need to curb your enthusiasm

Daniel W. Tigard
{"title":"关于废话,大型语言模型,以及需要抑制你的热情","authors":"Daniel W. Tigard","doi":"10.1007/s43681-025-00743-3","DOIUrl":null,"url":null,"abstract":"<div><p>Amidst all the hype around artificial intelligence (AI), particularly regarding large language models (LLMs), generative AI and chatbots like ChatGPT, a surge of headlines is instilling caution and even explicitly calling “bullshit” on such technologies. Should we follow suit? What exactly does it mean to call bullshit on an AI program? When is doing so a good idea, and when might it not be? With this paper, I aim to provide a brief guide on how to call bullshit on ChatGPT and related systems. In short, one must understand the basic nature of LLMs, how they function and what they produce, and one must recognize bullshit. I appeal to the prominent work of the late Harry Frankfurt and suggest that recent accounts jump too quickly to the conclusion that LLMs are bullshitting. In doing so, I offer a more level-headed approach to calling bullshit, and accordingly, a way of navigating some of the recent critiques of generative AI systems.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 5","pages":"4863 - 4873"},"PeriodicalIF":0.0000,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00743-3.pdf","citationCount":"0","resultStr":"{\"title\":\"On bullshit, large language models, and the need to curb your enthusiasm\",\"authors\":\"Daniel W. Tigard\",\"doi\":\"10.1007/s43681-025-00743-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Amidst all the hype around artificial intelligence (AI), particularly regarding large language models (LLMs), generative AI and chatbots like ChatGPT, a surge of headlines is instilling caution and even explicitly calling “bullshit” on such technologies. Should we follow suit? What exactly does it mean to call bullshit on an AI program? When is doing so a good idea, and when might it not be? With this paper, I aim to provide a brief guide on how to call bullshit on ChatGPT and related systems. In short, one must understand the basic nature of LLMs, how they function and what they produce, and one must recognize bullshit. I appeal to the prominent work of the late Harry Frankfurt and suggest that recent accounts jump too quickly to the conclusion that LLMs are bullshitting. 
In doing so, I offer a more level-headed approach to calling bullshit, and accordingly, a way of navigating some of the recent critiques of generative AI systems.</p></div>\",\"PeriodicalId\":72137,\"journal\":{\"name\":\"AI and ethics\",\"volume\":\"5 5\",\"pages\":\"4863 - 4873\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-05-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://link.springer.com/content/pdf/10.1007/s43681-025-00743-3.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI and ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s43681-025-00743-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00743-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract


Amidst all the hype around artificial intelligence (AI), particularly regarding large language models (LLMs), generative AI and chatbots like ChatGPT, a surge of headlines is instilling caution and even explicitly calling “bullshit” on such technologies. Should we follow suit? What exactly does it mean to call bullshit on an AI program? When is doing so a good idea, and when might it not be? With this paper, I aim to provide a brief guide on how to call bullshit on ChatGPT and related systems. In short, one must understand the basic nature of LLMs, how they function and what they produce, and one must recognize bullshit. I appeal to the prominent work of the late Harry Frankfurt and suggest that recent accounts jump too quickly to the conclusion that LLMs are bullshitting. In doing so, I offer a more level-headed approach to calling bullshit, and accordingly, a way of navigating some of the recent critiques of generative AI systems.
