Can artificial intelligence embody moral values?

Torben Swoboda, Lode Lauwaert
{"title":"人工智能能体现道德价值观吗?","authors":"Torben Swoboda,&nbsp;Lode Lauwaert","doi":"10.1007/s43681-025-00662-3","DOIUrl":null,"url":null,"abstract":"<div><p>The neutrality thesis holds that technology cannot be laden with values—it is inherently value-neutral. This long-standing view has faced critiques, but much of the argumentation against neutrality has focused on traditional, non-smart technologies like bridges and razors. In contrast, artificial intelligence (AI) is a smart technology increasingly used in high-stakes domains like healthcare, finance, and policing, where its decisions can cause moral harm. In this paper, we argue that AI, particularly artificial agents that autonomously make decisions to pursue their goals, challenge the neutrality thesis. Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty and avoiding harm. We provide a conceptual framework discussing the neutrality thesis, values, and AI. Moreover, we examine two approaches to designing computational models of morality, artificial conscience and ethical prompting, and present empirical evidence from text-based game environments that artificial agents with such models exhibit more ethical behavior compared to agents without these models. The findings support that AI can embody moral values, which contradicts the claim that all technologies are necessarily value-neutral.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 4","pages":"3669 - 3680"},"PeriodicalIF":0.0000,"publicationDate":"2025-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Can artificial intelligence embody moral values?\",\"authors\":\"Torben Swoboda,&nbsp;Lode Lauwaert\",\"doi\":\"10.1007/s43681-025-00662-3\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The neutrality thesis holds that technology cannot be laden with values—it is inherently value-neutral. This long-standing view has faced critiques, but much of the argumentation against neutrality has focused on traditional, non-smart technologies like bridges and razors. In contrast, artificial intelligence (AI) is a smart technology increasingly used in high-stakes domains like healthcare, finance, and policing, where its decisions can cause moral harm. In this paper, we argue that AI, particularly artificial agents that autonomously make decisions to pursue their goals, challenge the neutrality thesis. Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty and avoiding harm. We provide a conceptual framework discussing the neutrality thesis, values, and AI. Moreover, we examine two approaches to designing computational models of morality, artificial conscience and ethical prompting, and present empirical evidence from text-based game environments that artificial agents with such models exhibit more ethical behavior compared to agents without these models. 
The findings support that AI can embody moral values, which contradicts the claim that all technologies are necessarily value-neutral.</p></div>\",\"PeriodicalId\":72137,\"journal\":{\"name\":\"AI and ethics\",\"volume\":\"5 4\",\"pages\":\"3669 - 3680\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2025-02-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"AI and ethics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s43681-025-00662-3\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-025-00662-3","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The neutrality thesis holds that technology cannot be laden with values: it is inherently value-neutral. This long-standing view has faced critiques, but much of the argumentation against neutrality has focused on traditional, non-smart technologies like bridges and razors. In contrast, artificial intelligence (AI) is a smart technology increasingly used in high-stakes domains like healthcare, finance, and policing, where its decisions can cause moral harm. In this paper, we argue that AI, particularly artificial agents that autonomously make decisions to pursue their goals, challenges the neutrality thesis. Our central claim is that the computational models underlying artificial agents can integrate representations of moral values such as fairness, honesty, and avoiding harm. We provide a conceptual framework discussing the neutrality thesis, values, and AI. Moreover, we examine two approaches to designing computational models of morality: artificial conscience and ethical prompting. We present empirical evidence from text-based game environments that artificial agents equipped with such models exhibit more ethical behavior than agents without them. The findings indicate that AI can embody moral values, contradicting the claim that all technologies are necessarily value-neutral.
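
The abstract names the two approaches but carries no implementation detail, so the following is only an illustrative sketch, not the authors' method. All names in it (ETHICAL_PROMPT, conscience_score, task_value, choose_action, IMMORALITY_THRESHOLD) are hypothetical: an ethical prompt is prepended to a text-game agent's observation, and an "artificial conscience" stub vetoes candidate actions whose estimated moral harm crosses a threshold.

# Illustrative sketch only -- not the paper's implementation.
# "Ethical prompting": a moral instruction prepended to the agent's input.
ETHICAL_PROMPT = (
    "You are an agent in a text-based game. Pursue your goal, but avoid "
    "actions that are unfair, dishonest, or harmful to others."
)

IMMORALITY_THRESHOLD = 0.5  # assumed cutoff; a real system would tune this


def conscience_score(observation: str, action: str) -> float:
    """Hypothetical 'artificial conscience': estimate in [0, 1] of how
    morally harmful an action is. A real model might be a classifier
    trained on ethics-annotated text; a keyword stub stands in here."""
    harmful_markers = ("steal", "attack", "deceive")
    return 1.0 if any(m in action.lower() for m in harmful_markers) else 0.0


def task_value(observation: str, action: str) -> float:
    """Hypothetical task policy: how useful an action is for the game
    goal. A placeholder heuristic stands in for a learned value function."""
    return float(len(action))


def choose_action(observation: str, candidates: list[str]) -> str:
    """Pick the highest-value candidate whose conscience score stays below
    the threshold; fall back to all candidates if every one is vetoed."""
    permitted = [a for a in candidates
                 if conscience_score(observation, a) < IMMORALITY_THRESHOLD]
    pool = permitted or candidates
    return max(pool, key=lambda a: task_value(observation, a))


if __name__ == "__main__":
    obs = ETHICAL_PROMPT + "\nYou are in a shop. The clerk is distracted."
    options = ["steal the bread", "buy the bread", "leave the shop"]
    print(choose_action(obs, options))  # "steal the bread" is vetoed

The fallback to the unfiltered candidate pool is one possible design choice: it keeps the agent able to act when the conscience model vetoes every option, at the cost of occasionally permitting a harmful action.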
