Is Lawful AI Ethical AI?

Mason Kortz, Jessica Fjeld, Hannah Hilligoss, Adam Nagy
{"title":"Is Lawful AI Ethical AI?","authors":"Mason Kortz, Jessica Fjeld, Hannah Hilligoss, Adam Nagy","doi":"10.5771/2747-5174-2022-1-60","DOIUrl":null,"url":null,"abstract":"Attempts to impose moral constraints on autonomous, artificial decision-making systems range from “human in the loop” requirements to specialized languages for machine-readable moral rules. Regardless of the approach, though, such proposals all face the challenge that moral standards are not universal. It is tempting to use lawfulness as a proxy for morality; unlike moral rules, laws are usually explicitly defined and recorded – and they are usually at least roughly compatible with local moral norms. However, lawfulness is a highly abstracted and, thus, imperfect substitute for morality, and it should be relied on only with appropriate caution. In this paper, we argue that law-abiding AI systems are a more achievable goal than moral ones. At the same time, we argue that it’s important to understand the multiple layers of abstraction, legal and algorithmic, that underlie even the simplest AI-enabled decisions. The ultimate output of such a system may be far removed from the original intention and may not comport with the moral principles to which it was meant to adhere. Therefore, caution is required lest we develop AI systems that are technically law-abiding but still enable amoral or immoral conduct.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Morals & Machines","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5771/2747-5174-2022-1-60","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Attempts to impose moral constraints on autonomous, artificial decision-making systems range from “human in the loop” requirements to specialized languages for machine-readable moral rules. Regardless of the approach, though, such proposals all face the challenge that moral standards are not universal. It is tempting to use lawfulness as a proxy for morality; unlike moral rules, laws are usually explicitly defined and recorded – and they are usually at least roughly compatible with local moral norms. However, lawfulness is a highly abstracted and, thus, imperfect substitute for morality, and it should be relied on only with appropriate caution. In this paper, we argue that law-abiding AI systems are a more achievable goal than moral ones. At the same time, we argue that it’s important to understand the multiple layers of abstraction, legal and algorithmic, that underlie even the simplest AI-enabled decisions. The ultimate output of such a system may be far removed from the original intention and may not comport with the moral principles to which it was meant to adhere. Therefore, caution is required lest we develop AI systems that are technically law-abiding but still enable amoral or immoral conduct.