Moral decision-making in AI: A comprehensive review and recommendations

Impact Factor 13.3 · CAS Tier 1 (Management) · JCR Q1 (Business)
Jiwat Ram
DOI: 10.1016/j.techfore.2025.124150
Journal: Technological Forecasting and Social Change, Volume 217, Article 124150
Publication date: 2025-05-07 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0040162525001817
Citations: 0

Abstract

The increased reliance on artificial intelligence (AI) systems for decision-making has raised corresponding concerns about the morality of such decisions. However, knowledge on the subject remains fragmentary, and cogent understanding is lacking. This study addresses the gap by using Templier and Paré's (2015) six-step framework to perform a systematic literature review on moral decision-making by AI systems. A data sample of 494 articles was analysed to filter 280 articles for content analysis. Key findings are as follows: (1) Building moral decision-making capabilities in AI systems faces a variety of challenges relating to human decision-making, technology, ethics and values. The absence of consensus on what constitutes moral decision-making and the absence of a general theory of ethics are at the core of such challenges. (2) The literature is focused on narrative building; modelling or experiments/empirical studies are less illuminating, which causes a shortage of evidence-based knowledge. (3) Knowledge development is skewed towards a few domains, such as healthcare and transport. Academically, the study developed a four-pronged classification of challenges and a four-dimensional set of recommendations covering 18 investigation strands, to steer research that could resolve conflict between different moral principles and build a unified framework for moral decision-making in AI systems.
Source Journal
CiteScore: 21.30
Self-citation rate: 10.80%
Articles published: 813
Journal introduction: Technological Forecasting and Social Change is a prominent platform for individuals engaged in the methodology and application of technological forecasting and future studies as planning tools, exploring the interconnectedness of social, environmental, and technological factors. In addition to serving as a key forum for these discussions, we offer numerous benefits for authors, including complimentary PDFs, a generous copyright policy, exclusive discounts on Elsevier publications, and more.