Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare

Impact Factor: 4.8 · CAS Zone 2 (Computer Science) · JCR Q1 (Computer Science, Cybernetics)
R. Procter, P. Tolmie, M. Rouncefield
{"title":"Holding AI to Account: Challenges for the Delivery of Trustworthy AI in Healthcare","authors":"R. Procter, P. Tolmie, M. Rouncefield","doi":"10.1145/3577009","DOIUrl":null,"url":null,"abstract":"The need for AI systems to provide explanations for their behaviour is now widely recognised as key to their adoption. In this article, we examine the problem of trustworthy AI and explore what delivering this means in practice, with a focus on healthcare applications. Work in this area typically treats trustworthy AI as a problem of Human–Computer Interaction involving the individual user and an AI system. However, we argue here that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings. To illustrate the importance of organisational accountability, we present findings from ethnographic studies of breast cancer screening and cancer treatment planning in multidisciplinary team meetings to show how participants made themselves accountable both to each other and to the organisations of which they are members. We use these findings to enrich existing understandings of the requirements for trustworthy AI and to outline some candidate solutions to the problems of making AI accountable both to individual users and organisationally. We conclude by outlining the implications of this for future work on the development of trustworthy AI, including ways in which our proposed solutions may be re-used in different application settings.","PeriodicalId":50917,"journal":{"name":"ACM Transactions on Computer-Human Interaction","volume":"30 1","pages":"1 - 34"},"PeriodicalIF":4.8000,"publicationDate":"2022-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ACM Transactions on Computer-Human Interaction","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1145/3577009","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
引用次数: 6

Abstract

The need for AI systems to provide explanations for their behaviour is now widely recognised as key to their adoption. In this article, we examine the problem of trustworthy AI and explore what delivering this means in practice, with a focus on healthcare applications. Work in this area typically treats trustworthy AI as a problem of Human–Computer Interaction involving the individual user and an AI system. However, we argue here that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings. To illustrate the importance of organisational accountability, we present findings from ethnographic studies of breast cancer screening and cancer treatment planning in multidisciplinary team meetings to show how participants made themselves accountable both to each other and to the organisations of which they are members. We use these findings to enrich existing understandings of the requirements for trustworthy AI and to outline some candidate solutions to the problems of making AI accountable both to individual users and organisationally. We conclude by outlining the implications of this for future work on the development of trustworthy AI, including ways in which our proposed solutions may be re-used in different application settings.
Source Journal
ACM Transactions on Computer-Human Interaction
Category: Engineering & Technology – Computer Science: Cybernetics
CiteScore: 8.50
Self-citation rate: 5.40%
Articles published per year: 94
Review time: >12 weeks
About the journal: This ACM Transaction seeks to be the premier archival journal in the multidisciplinary field of human-computer interaction. Since its first issue in March 1994, it has presented work of the highest scientific quality that contributes to practice in the present and future. The primary emphasis is on results of broad application, but the journal considers original work focused on specific domains, on special requirements, and on ethical issues -- the full range of design, development, and use of interactive systems.