Towards a methodology for ethical artificial intelligence system development: A necessary trustworthiness taxonomy
Carlos Mario Braga, Manuel A. Serrano, Eduardo Fernández-Medina
Expert Systems with Applications, Volume 286, Article 128034 (published 2025-05-10). DOI: 10.1016/j.eswa.2025.128034
{"title":"Towards a methodology for ethical artificial intelligence system development: A necessary trustworthiness taxonomy","authors":"Carlos Mario Braga , Manuel A. Serrano , Eduardo Fernández-Medina","doi":"10.1016/j.eswa.2025.128034","DOIUrl":null,"url":null,"abstract":"<div><div>Recently, generative artificial intelligence (GenAI) has arisen and been rapidly adopted; due to its emergent abilities, there is a significantly increased need for risk management in the implementation of such systems. At the same time, many proposals for translating ethics into AI, as well as the first agreements by regulators governing the use of artificial intelligence (AI), have surfaced. This underscores the need for Trustworthy AI, which implies reliability, compliance, and ethics.</div><div>However, there is still a lack of unified criteria, and more critically, a lack of systematic methodologies for operationalizing trustworthiness within AI development processes. Trustworthiness is crucial, as it ensures that the system performs consistently under expected conditions while adhering to moral and legal standards. The problem of ensuring trustworthiness must be addressed as a preliminary step in creating a methodology for building AI systems with these desirable features. Based on a systematic literature review (SLR), we analyze the ethical, legal, and technological challenges that AI projects face, identifying key considerations and gaps in current approaches. This article presents a detailed and structured sociotechnical taxonomy related to the concept of Trustworthy AI, grounded in the analysis of all relevant texts on the topic, and designed to enable the systematic integration of ethical, legal, and technological principles into AI development processes. The taxonomy establishes a sociotechnical foundation that reflects the interconnected nature of technological, ethical, and legal considerations, and serves as the conceptual basis for CRISP-TAI, a proposed specialized development lifecycle currently under validation, aimed at systematically operationalizing trustworthiness principles across all phases of AI system engineering.</div></div>","PeriodicalId":50461,"journal":{"name":"Expert Systems with Applications","volume":"286 ","pages":"Article 128034"},"PeriodicalIF":7.5000,"publicationDate":"2025-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Expert Systems with Applications","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0957417425016550","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Generative artificial intelligence (GenAI) has recently emerged and been rapidly adopted; its emergent capabilities have significantly increased the need for risk management when implementing such systems. At the same time, numerous proposals for translating ethics into AI practice have surfaced, along with the first regulatory agreements governing the use of artificial intelligence (AI). This underscores the need for Trustworthy AI, which implies reliability, compliance, and ethics.
However, unified criteria are still lacking and, more critically, so are systematic methodologies for operationalizing trustworthiness within AI development processes. Trustworthiness is crucial: it ensures that a system performs consistently under expected conditions while adhering to moral and legal standards. Ensuring trustworthiness must therefore be addressed as a preliminary step in creating a methodology for building AI systems with these desirable features. Based on a systematic literature review (SLR), we analyze the ethical, legal, and technological challenges that AI projects face, identifying key considerations and gaps in current approaches. This article presents a detailed, structured sociotechnical taxonomy of Trustworthy AI, grounded in an analysis of the relevant literature and designed to enable the systematic integration of ethical, legal, and technological principles into AI development processes. The taxonomy establishes a sociotechnical foundation that reflects the interconnected nature of technological, ethical, and legal considerations, and it serves as the conceptual basis for CRISP-TAI, a proposed specialized development lifecycle, currently under validation, aimed at systematically operationalizing trustworthiness principles across all phases of AI system engineering.
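To make the idea of an operationalizable taxonomy concrete, the following is a minimal, purely illustrative Python sketch of how a sociotechnical trustworthiness taxonomy spanning ethical, legal, and technological dimensions might be represented and flattened into a reviewable checklist for a development process. The node names, dimension labels, and helper functions are hypothetical; the paper's actual taxonomy and the CRISP-TAI lifecycle are not reproduced here.

from dataclasses import dataclass, field

@dataclass
class TaxonomyNode:
    """A node in a trustworthiness taxonomy: a principle or sub-property.

    The dimension labels ("ethical", "legal", "technological",
    "sociotechnical") are illustrative, not the paper's terminology.
    """
    name: str
    dimension: str
    children: list["TaxonomyNode"] = field(default_factory=list)

def checklist(node: TaxonomyNode, path: tuple[str, ...] = ()) -> list[str]:
    """Flatten the taxonomy tree into qualified checklist items."""
    qualified = path + (node.name,)
    items = [f"[{node.dimension}] " + " > ".join(qualified)]
    for child in node.children:
        items.extend(checklist(child, qualified))
    return items

# Hypothetical fragment only; the paper's taxonomy is far richer.
root = TaxonomyNode("Trustworthy AI", "sociotechnical", [
    TaxonomyNode("Fairness", "ethical", [
        TaxonomyNode("Bias mitigation", "technological"),
    ]),
    TaxonomyNode("Regulatory compliance", "legal", [
        TaxonomyNode("Data-protection conformance", "legal"),
    ]),
    TaxonomyNode("Robustness", "technological"),
])

for item in checklist(root):
    print(item)

A structure of this kind could, for example, drive phase-by-phase trustworthiness reviews in a development lifecycle, which is the role the abstract assigns to the taxonomy as the conceptual basis for CRISP-TAI.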
Journal description:
Expert Systems With Applications is an international journal dedicated to the exchange of information on expert and intelligent systems used globally in industry, government, and universities. The journal emphasizes original papers covering the design, development, testing, implementation, and management of these systems, offering practical guidelines. It spans various sectors such as finance, engineering, marketing, law, project management, information management, medicine, and more. The journal also welcomes papers on multi-agent systems, knowledge management, neural networks, knowledge discovery, data mining, and other related areas, excluding applications to military/defense systems.