(Re)Conceptualizing trustworthy AI: A foundation for change

Christopher D. Wirz, Julie L. Demuth, Ann Bostrom, Mariana G. Cains, Imme Ebert-Uphoff, David John Gagne II, Andrea Schumacher, Amy McGovern, Deianna Madlambayan

Artificial Intelligence, Volume 342, Article 104309. Published 2025-02-22. DOI: 10.1016/j.artint.2025.104309
Citations: 0
Abstract
Developers and academics have grown increasingly interested in developing “trustworthy” artificial intelligence (AI). However, this aim is difficult to achieve in practice, especially given that trust and trustworthiness are complex, multifaceted concepts that can neither be completely guaranteed nor built entirely into an AI system. We have drawn on the breadth of trust-related literature across multiple disciplines and fields to synthesize knowledge pertaining to interpersonal trust, trust in automation, and risk and trust. Based on this review, we have (re)conceptualized trustworthiness in practice as being both (a) perceptual, meaning that a user assesses whether, when, and to what extent AI model output is trustworthy, even if it has been developed in adherence to AI trustworthiness standards, and (b) context-dependent, meaning that a user's perceived trustworthiness and use of an AI model can vary based on the specifics of their situation (e.g., time pressures for decision-making, high-stakes decisions). We provide our reconceptualization to nuance how trustworthiness is thought about, studied, and evaluated by the AI community in ways that are more aligned with past theoretical research.
About the journal:
Artificial Intelligence (AIJ) welcomes papers covering a broad spectrum of AI topics, including cognition, automated reasoning, computer vision, machine learning, and more. Papers should demonstrate advancements in AI and propose innovative approaches to AI problems. The journal also accepts papers describing AI applications, focusing on how new methods enhance performance rather than reiterating conventional approaches. In addition to regular papers, AIJ accepts Research Notes, Research Field Reviews, Position Papers, Book Reviews, and summary papers on AI challenges and competitions.