Reflection on the equitable attribution of responsibility for artificial intelligence-assisted diagnosis and treatment decisions

Impact Factor: 4.4 · JCR Quartile: Q1 (Computer Science, Interdisciplinary Applications)
Antian Chen, Chenyu Wang, Xinqing Zhang
{"title":"Reflection on the equitable attribution of responsibility for artificial intelligence-assisted diagnosis and treatment decisions","authors":"Antian Chen ,&nbsp;Chenyu Wang ,&nbsp;Xinqing Zhang","doi":"10.1016/j.imed.2022.04.002","DOIUrl":null,"url":null,"abstract":"<div><p>Artificial intelligence (AI) is developing rapidly and is being used in several medical capacities, including assisting in diagnosis and treatment decisions. As a result, this raises the conceptual and practical problem of how to distribute responsibility when AI-assisted diagnosis and treatment have been used and patients are harmed in the process. Regulations on this issue have not yet been established. It would be beneficial to tackle responsibility attribution prior to the development of biomedical AI technologies and ethical guidelines.</p><p>In general, human doctors acting as superiors need to bear responsibility for their clinical decisions. However, human doctors should not bear responsibility for the behavior of an AI doctor that is practicing medicine independently. According to the degree of fault—which includes internal institutional ethics, the AI bidding process in procurement, and the medical process—clinical institutions are required to bear corresponding responsibility. AI manufacturers are responsible for creating accurate algorithms, network security, and insuring patient privacy protection. However, the AI itself should not be subjected to legal evaluation since there is no need for it to bear responsibility. Corresponding responsibility should be borne by the employer, in this case the medical institution.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"3 2","pages":"Pages 139-143"},"PeriodicalIF":4.4000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent medicine","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667102622000353","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Citations: 0

Abstract

Artificial intelligence (AI) is developing rapidly and is being used in several medical capacities, including assisting in diagnosis and treatment decisions. This raises a conceptual and practical problem: how should responsibility be distributed when AI-assisted diagnosis and treatment are used and patients are harmed in the process? Regulations on this issue have not yet been established. It would be beneficial to address responsibility attribution ahead of the development of biomedical AI technologies and ethical guidelines.

In general, human doctors acting in a supervisory role must bear responsibility for their clinical decisions. However, human doctors should not bear responsibility for the behavior of an AI doctor that practices medicine independently. Clinical institutions are required to bear corresponding responsibility according to their degree of fault, which covers internal institutional ethics, the AI bidding process in procurement, and the medical process itself. AI manufacturers are responsible for algorithm accuracy, network security, and ensuring patient privacy protection. The AI itself, however, should not be subject to legal evaluation, since there is no basis for it to bear responsibility; the corresponding responsibility should instead be borne by its employer, in this case the medical institution.

Source journal

Intelligent Medicine (Surgery, Radiology and Imaging; Artificial Intelligence; Biomedical Engineering)
CiteScore: 5.20
Self-citation rate: 0.00%
Articles published: 19