Towards a methodology for ethical artificial intelligence system development: A necessary trustworthiness taxonomy

IF 7.5 · CAS Tier 1 (Computer Science) · JCR Q1, COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE
Carlos Mario Braga, Manuel A. Serrano, Eduardo Fernández-Medina
{"title":"Towards a methodology for ethical artificial intelligence system development: A necessary trustworthiness taxonomy","authors":"Carlos Mario Braga ,&nbsp;Manuel A. Serrano ,&nbsp;Eduardo Fernández-Medina","doi":"10.1016/j.eswa.2025.128034","DOIUrl":null,"url":null,"abstract":"<div><div>Recently, generative artificial intelligence (GenAI) has arisen and been rapidly adopted; due to its emergent abilities, there is a significantly increased need for risk management in the implementation of such systems. At the same time, many proposals for translating ethics into AI, as well as the first agreements by regulators governing the use of artificial intelligence (AI), have surfaced. This underscores the need for Trustworthy AI, which implies reliability, compliance, and ethics.</div><div>However, there is still a lack of unified criteria, and more critically, a lack of systematic methodologies for operationalizing trustworthiness within AI development processes. Trustworthiness is crucial, as it ensures that the system performs consistently under expected conditions while adhering to moral and legal standards. The problem of ensuring trustworthiness must be addressed as a preliminary step in creating a methodology for building AI systems with these desirable features. Based on a systematic literature review (SLR), we analyze the ethical, legal, and technological challenges that AI projects face, identifying key considerations and gaps in current approaches. This article presents a detailed and structured sociotechnical taxonomy related to the concept of Trustworthy AI, grounded in the analysis of all relevant texts on the topic, and designed to enable the systematic integration of ethical, legal, and technological principles into AI development processes. The taxonomy establishes a sociotechnical foundation that reflects the interconnected nature of technological, ethical, and legal considerations, and serves as the conceptual basis for CRISP-TAI, a proposed specialized development lifecycle currently under validation, aimed at systematically operationalizing trustworthiness principles across all phases of AI system engineering.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"286 ","pages":"Article 128034"},"PeriodicalIF":7.5000,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425016550","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Generative artificial intelligence (GenAI) has recently emerged and been rapidly adopted; because of its emergent abilities, the need for risk management in the implementation of such systems has increased significantly. At the same time, many proposals for translating ethics into AI, as well as the first regulatory agreements governing the use of artificial intelligence (AI), have surfaced. This underscores the need for Trustworthy AI, which implies reliability, compliance, and ethics.
However, there is still a lack of unified criteria and, more critically, of systematic methodologies for operationalizing trustworthiness within AI development processes. Trustworthiness is crucial, as it ensures that a system performs consistently under expected conditions while adhering to moral and legal standards. Ensuring trustworthiness must be addressed as a preliminary step in creating a methodology for building AI systems with these desirable features. Based on a systematic literature review (SLR), we analyze the ethical, legal, and technological challenges that AI projects face, identifying key considerations and gaps in current approaches. This article presents a detailed and structured sociotechnical taxonomy related to the concept of Trustworthy AI, grounded in the analysis of all relevant texts on the topic and designed to enable the systematic integration of ethical, legal, and technological principles into AI development processes. The taxonomy establishes a sociotechnical foundation that reflects the interconnected nature of technological, ethical, and legal considerations, and it serves as the conceptual basis for CRISP-TAI, a proposed specialized development lifecycle currently under validation, aimed at systematically operationalizing trustworthiness principles across all phases of AI system engineering.
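The taxonomy itself is not reproduced in this abstract, but as a rough illustration of what "operationalizing" such a structure could look like in practice, the following minimal Python sketch encodes trustworthiness dimensions and principles and computes a simple coverage score for a project. The `Dimension` and `TrustworthinessTaxonomy` classes, the dimension and principle names, and the coverage metric are placeholders of our own choosing, not the article's taxonomy or its CRISP-TAI lifecycle.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: names and structure are illustrative assumptions,
# not the taxonomy or lifecycle proposed in the article.

@dataclass
class Dimension:
    name: str                                   # e.g. "Ethical", "Legal", "Technological"
    principles: list[str] = field(default_factory=list)

@dataclass
class TrustworthinessTaxonomy:
    dimensions: list[Dimension] = field(default_factory=list)

    def coverage(self, addressed: set[str]) -> float:
        """Fraction of listed principles that a project claims to address."""
        all_principles = [p for d in self.dimensions for p in d.principles]
        if not all_principles:
            return 0.0
        return sum(p in addressed for p in all_principles) / len(all_principles)

# Example usage with placeholder principles (not taken from the article):
taxonomy = TrustworthinessTaxonomy([
    Dimension("Ethical", ["fairness", "transparency"]),
    Dimension("Legal", ["data-protection compliance"]),
    Dimension("Technological", ["robustness", "reproducibility"]),
])
print(taxonomy.coverage({"fairness", "robustness"}))  # 0.4
```

A checklist-style score like this is only one possible way to make a taxonomy actionable inside a development process; the article's approach may differ substantially.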
Source Journal
Expert Systems with Applications (Engineering & Technology: Electrical & Electronic Engineering)
CiteScore: 13.80
Self-citation rate: 10.60%
Articles per year: 2045
Review time: 8.7 months
Journal description: Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.