{"title":"机器人博士应该具有道德同情心吗?","authors":"Elisabetta Sirgiovanni Ph.D.","doi":"10.1111/bioe.13345","DOIUrl":null,"url":null,"abstract":"<p>Critics of clinical artificial intelligence (AI) suggest that the technology is ethically harmful because it may lead to the dehumanization of the doctor–patient relationship (DPR) by eliminating moral empathy, which is viewed as a distinctively human trait. The benefits of clinical empathy—that is, moral empathy applied in the clinical context—are widely praised, but this praise is often unquestioning and lacks context. In this article, I will argue that criticisms of clinical AI based on appeals to empathy are misplaced. As psychological and philosophical research has shown, empathy leads to certain types of biased reasoning and choices. These biases of empathy consistently impact the DPR. Empathy may lead to partial judgments and asymmetric DPRs, as well as disparities in the treatment of patients, undermining respect for patient autonomy and equality. Engineers should consider the flaws of empathy when designing affective artificial systems in the future. The nature of sympathy and compassion (i.e., displaying emotional concern while maintaining some balanced distance) has been defended by some ethicists as more beneficial than perspective-taking in the clinical context. However, these claims do not seem to have impacted the AI debate. Thus, this article will also argue that if machines are programmed for affective behavior, they should also be given some ethical scaffolding.</p>","PeriodicalId":55379,"journal":{"name":"Bioethics","volume":"39 1","pages":"98-107"},"PeriodicalIF":1.7000,"publicationDate":"2024-08-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/bioe.13345","citationCount":"0","resultStr":"{\"title\":\"Should Doctor Robot possess moral empathy?\",\"authors\":\"Elisabetta Sirgiovanni Ph.D.\",\"doi\":\"10.1111/bioe.13345\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>Critics of clinical artificial intelligence (AI) suggest that the technology is ethically harmful because it may lead to the dehumanization of the doctor–patient relationship (DPR) by eliminating moral empathy, which is viewed as a distinctively human trait. The benefits of clinical empathy—that is, moral empathy applied in the clinical context—are widely praised, but this praise is often unquestioning and lacks context. In this article, I will argue that criticisms of clinical AI based on appeals to empathy are misplaced. As psychological and philosophical research has shown, empathy leads to certain types of biased reasoning and choices. These biases of empathy consistently impact the DPR. Empathy may lead to partial judgments and asymmetric DPRs, as well as disparities in the treatment of patients, undermining respect for patient autonomy and equality. Engineers should consider the flaws of empathy when designing affective artificial systems in the future. The nature of sympathy and compassion (i.e., displaying emotional concern while maintaining some balanced distance) has been defended by some ethicists as more beneficial than perspective-taking in the clinical context. However, these claims do not seem to have impacted the AI debate. 
Thus, this article will also argue that if machines are programmed for affective behavior, they should also be given some ethical scaffolding.</p>\",\"PeriodicalId\":55379,\"journal\":{\"name\":\"Bioethics\",\"volume\":\"39 1\",\"pages\":\"98-107\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2024-08-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/bioe.13345\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Bioethics\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/bioe.13345\",\"RegionNum\":2,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Bioethics","FirstCategoryId":"98","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/bioe.13345","RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ETHICS","Score":null,"Total":0}
Critics of clinical artificial intelligence (AI) suggest that the technology is ethically harmful because it may lead to the dehumanization of the doctor–patient relationship (DPR) by eliminating moral empathy, which is viewed as a distinctively human trait. The benefits of clinical empathy—that is, moral empathy applied in the clinical context—are widely praised, but this praise is often unquestioning and lacks context. In this article, I will argue that criticisms of clinical AI based on appeals to empathy are misplaced. As psychological and philosophical research has shown, empathy leads to certain types of biased reasoning and choices. These biases of empathy consistently impact the DPR. Empathy may lead to partial judgments and asymmetric DPRs, as well as disparities in the treatment of patients, undermining respect for patient autonomy and equality. Engineers should consider the flaws of empathy when designing affective artificial systems in the future. The nature of sympathy and compassion (i.e., displaying emotional concern while maintaining some balanced distance) has been defended by some ethicists as more beneficial than perspective-taking in the clinical context. However, these claims do not seem to have impacted the AI debate. Thus, this article will also argue that if machines are programmed for affective behavior, they should also be given some ethical scaffolding.
Journal introduction:
As medical technology continues to develop, the subject of bioethics has ever-increasing practical relevance for all those working in philosophy, medicine, law, sociology, public policy, education and related fields.
Bioethics provides a forum for well-argued articles on the ethical questions raised by current issues such as: international collaborative clinical research in developing countries; public health; infectious disease; AIDS; managed care; genomics and stem cell research. These questions are considered in relation to concrete ethical, legal and policy problems, or in terms of the fundamental concepts, principles and theories used in discussions of such problems.
Bioethics also features regular Background Briefings on important current debates in the field. These feature articles provide excellent material for bioethics scholars, teachers and students alike.