Reconstructing AI Ethics Principles: Rawlsian Ethics of Artificial Intelligence.

IF 2.7 · Q1 (Engineering, Multidisciplinary) · CAS Region 2 (Philosophy)
Salla Westerstrand
DOI: 10.1007/s11948-024-00507-y
Journal: Science and Engineering Ethics, 30(5), Article 46
Published: 2024-10-09 (Journal Article)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11464555/pdf/
Citations: 0

Abstract

The popularisation of Artificial Intelligence (AI) technologies has sparked discussion about their ethical implications. This development has forced governmental organisations, NGOs, and private companies to react and draft ethics guidelines for the future development of ethical AI systems. Whereas many ethics guidelines address values familiar to ethicists, they seem to lack ethical justification. Furthermore, most tend to neglect the impact of AI on democracy, governance, and public deliberation. Existing research suggests, however, that AI can threaten key elements of Western democracies that are ethically relevant. In this paper, Rawls's theory of justice is applied to draft a set of guidelines for organisations and policy-makers to guide AI development in a more ethical direction. The goal is to contribute to broadening the discussion on AI ethics by exploring the possibility of constructing AI ethics guidelines that are philosophically justified and take a broader perspective on societal justice. The paper discusses how Rawls's theory of justice as fairness and its key concepts relate to ongoing developments in AI ethics, and proposes what principles offering a foundation for operationalising AI ethics in practice could look like if aligned with Rawls's theory of justice as fairness.

Source Journal

Science and Engineering Ethics
CiteScore: 10.70
Self-citation rate: 5.40%
Articles per year: 54
Review time: >12 weeks
Journal description: Science and Engineering Ethics is an international multidisciplinary journal dedicated to exploring ethical issues associated with science and engineering, covering professional education, research and practice as well as the effects of technological innovations and research findings on society. While the focus of this journal is on science and engineering, contributions from a broad range of disciplines, including social sciences and humanities, are welcomed. Areas of interest include, but are not limited to, ethics of new and emerging technologies, research ethics, computer ethics, energy ethics, animals and human subjects ethics, ethics education in science and engineering, ethics in design, biomedical ethics, values in technology and innovation. We welcome contributions that deal with these issues from an international perspective, particularly from countries that are underrepresented in these discussions.