Stoking fears of AI X-Risk (while forgetting justice here and now)

Impact Factor: 3.3 · JCR Q1 (Ethics) · CAS Region 2 (Philosophy)
Nancy S Jecker, Caesar Alimsinya Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky, Anita Ho
Journal: Journal of Medical Ethics · DOI: 10.1136/jme-2024-110402 · Published: 2024-09-11 · Type: Journal Article · Citations: 0

Abstract

We appreciate the helpful commentaries on our paper, ‘AI and the falling sky: interrogating X-Risk’.1 We agree with many points commentators raise, which opened our eyes to concerns we had not previously considered. This reply focuses on the tension many commentators noted between AI’s existential risks (X-Risks) and justice here and now. In ‘Existential risk and the justice turn in bioethics’, Corsico frames the tension between AI X-Risk and justice here and now as part of a larger shift within bioethics.2 They think the field is increasingly turning away from ‘big picture’ questions new technologies raise and focusing on narrower justice concerns of less significance. They compare our paper’s emphasis on justly transitioning to more AI-centred societies with the approach of environmentalists fretting about human protection against climate change while losing sight of the need to protect the planet and all living things. Just as Corsico doubts there is much point in pressing for ‘justice on a dead planet’, they question our concern with just transitions, which presumably matters little if intelligent life on Earth is destroyed. Corsico recommends bioethicists return to big questions, such as: ‘Should we develop AI at all, given AI X-Risk?’ Yet this question is increasingly moot. The genie is already out of the bottle. The Future of Life Institute’s 2023 call for a temporary pause on training AI systems more powerful than ChatGPT fell on deaf ears, in part because freely accessible source code enables anyone to train systems and create AI applications.3 The focus now must be managing the genie. While we take AI X-Risk seriously, …
Source journal: Journal of Medical Ethics (Medicine – Medical Ethics)
CiteScore: 7.80
Self-citation rate: 9.80%
Articles per year: 164
Review time: 4-8 weeks
About the journal: Journal of Medical Ethics is a leading international journal that reflects the whole field of medical ethics. The journal seeks to promote ethical reflection and conduct in scientific research and medical practice. It features articles on various ethical aspects of health care relevant to health care professionals, members of clinical ethics committees, medical ethics professionals, researchers and bioscientists, policy makers and patients. Subscribers to the Journal of Medical Ethics also receive the Medical Humanities journal at no extra cost. JME is the official journal of the Institute of Medical Ethics.