The role of patient outcomes in shaping moral responsibility in AI-supported decision making

IF 2.5 | Q2 | RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING
C. Edwards, A. Murphy, A. Singh, S. Daniel, C. Chamunyonga
{"title":"The role of patient outcomes in shaping moral responsibility in AI-supported decision making","authors":"C. Edwards ,&nbsp;A. Murphy ,&nbsp;A. Singh ,&nbsp;S. Daniel ,&nbsp;C. Chamunyonga","doi":"10.1016/j.radi.2025.102948","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>Integrating decision support mechanisms utilising artificial intelligence (AI) into medical radiation practice introduces unique challenges to accountability for patient care outcomes. AI systems, often seen as “black boxes,” can obscure decision-making processes, raising concerns about practitioner responsibility, especially in adverse outcomes. This study examines how medical radiation practitioners perceive and attribute moral responsibility when interacting with AI-assisted decision-making tools.</div></div><div><h3>Methods</h3><div>A cross-sectional online survey was conducted from September to December 2024, targeting international medical radiation practitioners. Participants were randomly assigned one of four profession-specific scenarios involving AI recommendations and patient outcomes. A 5-point Likert scale assessed the practitioner's perceptions of moral responsibility, and the responses were analysed using descriptive statistics, Kruskal–Wallis tests, and ordinal regression. Demographic and contextual factors were also evaluated.</div></div><div><h3>Results</h3><div>649 radiographers, radiation therapists, nuclear medicine scientists, and sonographers provided complete responses. Most participants (49.8 %) had experience using AI in their current roles. Practitioners assigned higher moral responsibility to themselves in positive patient outcomes compared to negative ones (χ<sup>2</sup>(1) = 18.98, p &lt; 0.001). Prior knowledge of AI ethics and professional discipline significantly influenced responsibility ratings. While practitioners generally accepted responsibility, 33 % also attributed shared responsibility to AI developers and institutions.</div></div><div><h3>Conclusion</h3><div>Patient outcomes significantly influence perceptions of moral responsibility, with a shift toward shared accountability in adverse scenarios. Prior knowledge of AI ethics is crucial in shaping these perceptions, highlighting the need for targeted education.</div></div><div><h3>Implications for practice</h3><div>Understanding practitioner perceptions of accountability is critical for developing ethical frameworks, training programs, and shared responsibility models that ensure the safe integration of AI into clinical practice. Robust regulatory structures are necessary to address the unique challenges of AI-assisted decision-making.</div></div>","PeriodicalId":47416,"journal":{"name":"Radiography","volume":"31 3","pages":"Article 102948"},"PeriodicalIF":2.5000,"publicationDate":"2025-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiography","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1078817425000926","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING","Score":null,"Total":0}
Citations: 0

Abstract

Introduction

Integrating artificial intelligence (AI) decision support mechanisms into medical radiation practice introduces unique challenges to accountability for patient care outcomes. AI systems, often seen as “black boxes,” can obscure decision-making processes, raising concerns about practitioner responsibility, especially when outcomes are adverse. This study examines how medical radiation practitioners perceive and attribute moral responsibility when interacting with AI-assisted decision-making tools.

Methods

A cross-sectional online survey of international medical radiation practitioners was conducted from September to December 2024. Participants were randomly assigned one of four profession-specific scenarios involving AI recommendations and patient outcomes. A 5-point Likert scale assessed practitioners' perceptions of moral responsibility, and responses were analysed using descriptive statistics, Kruskal–Wallis tests, and ordinal regression. Demographic and contextual factors were also evaluated.
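The article does not include analysis code; the following is a minimal sketch of the analysis pipeline described above, using simulated 5-point Likert responses. The variable names (responsibility, outcome, ai_ethics_knowledge) and the two-arm grouping are illustrative assumptions, not the study's actual data, and the sketch relies on scipy's kruskal and statsmodels' OrderedModel.

```python
# Illustrative sketch of the described analysis, NOT the authors' code.
# Data are simulated; variable names are assumptions for demonstration only.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    # 1-5 Likert rating of self-attributed moral responsibility (simulated)
    "responsibility": rng.integers(1, 6, size=n),
    # scenario outcome arm each participant was assigned to (simulated)
    "outcome": rng.choice(["positive", "negative"], size=n),
    # prior exposure to AI ethics education (simulated covariate)
    "ai_ethics_knowledge": rng.choice([0, 1], size=n),
})

# Kruskal-Wallis test comparing responsibility ratings across outcome groups;
# with two groups the H statistic is referred to a chi-squared with 1 df
groups = [g["responsibility"].to_numpy() for _, g in df.groupby("outcome")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H(1) = {h_stat:.2f}, p = {p_value:.4f}")

# Ordinal (proportional-odds) regression of ratings on contextual factors
exog = pd.get_dummies(df[["outcome", "ai_ethics_knowledge"]],
                      columns=["outcome"], drop_first=True).astype(float)
model = OrderedModel(df["responsibility"], exog, distr="logit")
print(model.fit(method="bfgs", disp=False).summary())
```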

Results

A total of 649 radiographers, radiation therapists, nuclear medicine scientists, and sonographers provided complete responses. Nearly half of participants (49.8 %) had experience using AI in their current roles. Practitioners assigned higher moral responsibility to themselves for positive patient outcomes than for negative ones (χ²(1) = 18.98, p < 0.001). Prior knowledge of AI ethics and professional discipline significantly influenced responsibility ratings. While practitioners generally accepted responsibility, 33 % also attributed shared responsibility to AI developers and institutions.
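As a sanity check on the reported statistic (illustrative, not from the paper), the χ²(1) value can be converted to a p-value with scipy; the result is about 1.3 × 10⁻⁵, consistent with the reported p < 0.001.

```python
from scipy.stats import chi2

# Survival function of chi-squared with df=1 at the reported statistic;
# chi2.sf(18.98, df=1) ≈ 1.3e-5, consistent with the reported p < 0.001
print(chi2.sf(18.98, df=1))
```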

Conclusion

Patient outcomes significantly influence perceptions of moral responsibility, with a shift toward shared accountability in adverse scenarios. Prior knowledge of AI ethics is crucial in shaping these perceptions, highlighting the need for targeted education.

Implications for practice

Understanding practitioner perceptions of accountability is critical for developing ethical frameworks, training programs, and shared responsibility models that ensure the safe integration of AI into clinical practice. Robust regulatory structures are necessary to address the unique challenges of AI-assisted decision-making.
Source journal

Radiography (RADIOLOGY, NUCLEAR MEDICINE & MEDICAL IMAGING)
CiteScore: 4.70
Self-citation rate: 34.60 %
Articles published: 169
Review time: 63 days
Journal introduction: Radiography is an international, English-language, peer-reviewed journal of diagnostic imaging and radiation therapy. Radiography is the official professional journal of the College of Radiographers and is published quarterly. Radiography aims to publish the highest quality material, both clinical and scientific, on all aspects of diagnostic imaging and radiation therapy and oncology.