Prudential reasons for designing entitled chatbots: How robot "rights" can improve human well-being
Guido Löhr, Matthew Dennis
AI and Ethics, vol. 5, no. 4, pp. 3791–3802. Published 2025-02-17.
DOI: 10.1007/s43681-025-00676-x
Publisher page: https://link.springer.com/article/10.1007/s43681-025-00676-x
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12208972/pdf/
Can robots or chatbots be moral patients? The question of robot rights is often framed in terms of moral reasons, such as precautionary principles or the capacity to suffer. We argue that we have prudential reasons to build robots that can at least hold us accountable (e.g., by criticizing us) and that can demand to be treated with respect. This proposal aims to add nuance to the robot-rights debate by answering a key question: why should we want to build robots that could have rights in the first place? We argue that some degree of accountability in our social relationships contributes to our well-being and flourishing. The normativity ascribed to robots will enhance their social and non-social functions, from action coordination to more meaningful relationships. Having a robot with a certain "standing" to hold us accountable can improve our epistemic position and satisfy our desire for recognition.