Prudential reasons for designing entitled chatbots: How robot "rights" can improve human well-being

Guido Löhr, Matthew Dennis
*AI and Ethics*, vol. 5, no. 4, pp. 3791–3802. Published 2025-02-17.
DOI: 10.1007/s43681-025-00676-x
Full text: https://link.springer.com/article/10.1007/s43681-025-00676-x
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12208972/pdf/
Citations: 0

Abstract


Can robots or chatbots be moral patients? The question of robot rights is often linked to moral reasons like precautionary principles or the ability to suffer. We argue that we have prudential reasons for building robots that can at least hold us accountable (criticize us etc.) and that we have prudential reasons to build robots that can demand that we treat them with respect. This proposal aims to add nuance to the robot rights debate by answering a key question: Why should we want to build robots that could have rights in the first place? We argue that some degree of accountability in our social relationships contributes to our well-being and flourishing. The normativity ascribed to robots will increase their social and non-social functionalities from action coordination to more meaningful relationships. Having a robot that has a certain “standing” to hold us accountable can improve our epistemic standing and satisfy our desire for recognition.
