{"title":"Reflection on the equitable attribution of responsibility for artificial intelligence-assisted diagnosis and treatment decisions","authors":"Antian Chen , Chenyu Wang , Xinqing Zhang","doi":"10.1016/j.imed.2022.04.002","DOIUrl":null,"url":null,"abstract":"<div><p>Artificial intelligence (AI) is developing rapidly and is being used in several medical capacities, including assisting in diagnosis and treatment decisions. As a result, this raises the conceptual and practical problem of how to distribute responsibility when AI-assisted diagnosis and treatment have been used and patients are harmed in the process. Regulations on this issue have not yet been established. It would be beneficial to tackle responsibility attribution prior to the development of biomedical AI technologies and ethical guidelines.</p><p>In general, human doctors acting as superiors need to bear responsibility for their clinical decisions. However, human doctors should not bear responsibility for the behavior of an AI doctor that is practicing medicine independently. According to the degree of fault—which includes internal institutional ethics, the AI bidding process in procurement, and the medical process—clinical institutions are required to bear corresponding responsibility. AI manufacturers are responsible for creating accurate algorithms, network security, and insuring patient privacy protection. However, the AI itself should not be subjected to legal evaluation since there is no need for it to bear responsibility. Corresponding responsibility should be borne by the employer, in this case the medical institution.</p></div>","PeriodicalId":73400,"journal":{"name":"Intelligent medicine","volume":"3 2","pages":"Pages 139-143"},"PeriodicalIF":4.4000,"publicationDate":"2023-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Intelligent medicine","FirstCategoryId":"3","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2667102622000353","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
Abstract
Artificial intelligence (AI) is developing rapidly and is being used in several medical capacities, including assisting in diagnosis and treatment decisions. This raises the conceptual and practical problem of how to distribute responsibility when AI-assisted diagnosis and treatment are used and patients are harmed in the process. Regulations on this issue have not yet been established, so it would be beneficial to settle responsibility attribution before biomedical AI technologies, and the ethical guidelines that govern them, develop further.
In general, human doctors who supervise AI need to bear responsibility for their clinical decisions; they should not, however, bear responsibility for the behavior of an AI doctor that practices medicine independently. Clinical institutions are required to bear responsibility in proportion to their degree of fault, which spans internal institutional ethics, the bidding process for AI procurement, and the medical process itself. AI manufacturers are responsible for creating accurate algorithms, maintaining network security, and ensuring patient privacy protection. The AI itself, however, should not be subjected to legal evaluation, since there is no need for it to bear responsibility; the corresponding responsibility should be borne by its employer, in this case the medical institution.