Unlearning Descartes: Sentient AI is a Political Problem

Gordon Hull
{"title":"Unlearning Descartes: Sentient AI is a Political Problem","authors":"Gordon Hull","doi":"10.23919/JSC.2023.0020","DOIUrl":null,"url":null,"abstract":"The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A “Cartesian” approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is conscious. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative “Hobbesian” approach treats consciousness as a sociopolitical issue and is concerned with what the implications are for labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood and allows regulation to proceed in the face of intractable problems in deciding if something “really is” sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask competing interests in creating it, at the expense of the most vulnerable people involved; and (b) it will tend to hinder efforts at holding someone accountable for the behavior of the systems. A major objective of this paper is accordingly to encourage a shift in thinking. In place of the Cartesian question—is AI sentient?—I propose that we confront the more Hobbesian one: Does it make sense to regulate developments in which AI systems behave as if they were sentient?","PeriodicalId":67535,"journal":{"name":"社会计算(英文)","volume":"4 3","pages":"193-204"},"PeriodicalIF":0.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10375936","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"社会计算(英文)","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10375936/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The emergence of Large Language Models (LLMs) has renewed debate about whether Artificial Intelligence (AI) can be conscious or sentient. This paper identifies two approaches to the topic and argues: (1) A “Cartesian” approach treats consciousness, sentience, and personhood as very similar terms, and treats language use as evidence that an entity is conscious. This approach, which has been dominant in AI research, is primarily interested in what consciousness is, and whether an entity possesses it. (2) An alternative “Hobbesian” approach treats consciousness as a sociopolitical issue and is concerned with the implications of labeling something sentient or conscious. This both enables a political disambiguation of language, consciousness, and personhood and allows regulation to proceed in the face of intractable problems in deciding whether something “really is” sentient. (3) AI systems should not be treated as conscious, for at least two reasons: (a) treating the system as an origin point tends to mask the competing interests behind its creation, at the expense of the most vulnerable people involved; and (b) it tends to hinder efforts to hold someone accountable for the behavior of such systems. A major objective of this paper is accordingly to encourage a shift in thinking. In place of the Cartesian question (is AI sentient?), I propose that we confront the more Hobbesian one: does it make sense to regulate developments in which AI systems behave as if they were sentient?