Ethical issues of implementing artificial intelligence in medicine

Maxim I. Konkov
{"title":"在医学中实施人工智能的伦理问题","authors":"Maxim I. Konkov","doi":"10.17816/dd430348","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) systems are highly efficient. However, their implementation in medical practice is accompanied by a range of ethical issues. The black box problem is basic to the AI philosophy, although having its own specificity in relation to medicine. A selection of relevant papers for the last three years by citations and their analysis through PubMed and Google Scholar search engines was conducted to study the problems of the AI implementation in medicine. One of the central problems is that the algorithms to justify decisions are still unclear to doctors and patients. The lack of clear and reasonable principles of AI operation is called the black box problem. How can doctors rely on AI findings without enough data to explain a particular decision? Who will be responsible for the final decision in case of an adverse outcome (death or serious injury)? In routine practice, medical decisions are based on an integrative approach (understanding of pathophysiology and biochemistry and interpretation of past findings), clinical trials and cohort studies. AI may be used to build a plan for disease diagnosis and treatment, while not providing a convincing justification for specific decisions. This creates a black box, since the information that the AI considers important for making a conclusion is not always clear, nor is it clear how or why the AI reaches that conclusion. Thus, Juan M. Durn writes, Even if we claim to understand the principles underlying AI annotation and training, it is still difficult and often even impossible to understand the inner workings of such systems. The doctor can interpret or verify the results of these algorithms, but cannot explain how the algorithm arrived at its recommendations or diagnosis. Currently, AI models are trained to recognize microscopic adenomas and polyps in the colon. However, doctors still have insufficient understanding of how AI differentiates between different types of polyps despite the high accuracy, and the signs that are key to making an AI diagnosis remain unclear to experienced endoscopists. Another example is the biomarkers of colorectal cancer recognized by AI. The doctor does not know how algorithms determine the quantitative and qualitative criteria of detectable biomarkers to formulate a final diagnosis in each individual case, i.e., a black box of process pathology emerges. For the trust of doctors and patients to be earned, the processes underlying the work of AI must be deciphered and explained, describing how it is done sequentially, step by step, and a specific result is to be formulated. Although the black box algorithms cannot be called transparent, the possibility of applying these technologies in practical medicine is worth considering. Despite the above problems, the accuracy and efficiency of solutions does not allow to neglect the use of AI. On the contrary, this use is necessary. 
Emerging problems should serve as a basis for training and educating doctors to work with AI, expanding the scope of application and developing new diagnostic techniques.","PeriodicalId":34831,"journal":{"name":"Digital Diagnostics","volume":" ","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ethical issues of implementing artificial intelligence in medicine\",\"authors\":\"Maxim I. Konkov\",\"doi\":\"10.17816/dd430348\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) systems are highly efficient. However, their implementation in medical practice is accompanied by a range of ethical issues. The black box problem is basic to the AI philosophy, although having its own specificity in relation to medicine. A selection of relevant papers for the last three years by citations and their analysis through PubMed and Google Scholar search engines was conducted to study the problems of the AI implementation in medicine. One of the central problems is that the algorithms to justify decisions are still unclear to doctors and patients. The lack of clear and reasonable principles of AI operation is called the black box problem. How can doctors rely on AI findings without enough data to explain a particular decision? Who will be responsible for the final decision in case of an adverse outcome (death or serious injury)? In routine practice, medical decisions are based on an integrative approach (understanding of pathophysiology and biochemistry and interpretation of past findings), clinical trials and cohort studies. AI may be used to build a plan for disease diagnosis and treatment, while not providing a convincing justification for specific decisions. This creates a black box, since the information that the AI considers important for making a conclusion is not always clear, nor is it clear how or why the AI reaches that conclusion. Thus, Juan M. Durn writes, Even if we claim to understand the principles underlying AI annotation and training, it is still difficult and often even impossible to understand the inner workings of such systems. The doctor can interpret or verify the results of these algorithms, but cannot explain how the algorithm arrived at its recommendations or diagnosis. Currently, AI models are trained to recognize microscopic adenomas and polyps in the colon. However, doctors still have insufficient understanding of how AI differentiates between different types of polyps despite the high accuracy, and the signs that are key to making an AI diagnosis remain unclear to experienced endoscopists. Another example is the biomarkers of colorectal cancer recognized by AI. The doctor does not know how algorithms determine the quantitative and qualitative criteria of detectable biomarkers to formulate a final diagnosis in each individual case, i.e., a black box of process pathology emerges. For the trust of doctors and patients to be earned, the processes underlying the work of AI must be deciphered and explained, describing how it is done sequentially, step by step, and a specific result is to be formulated. Although the black box algorithms cannot be called transparent, the possibility of applying these technologies in practical medicine is worth considering. Despite the above problems, the accuracy and efficiency of solutions does not allow to neglect the use of AI. On the contrary, this use is necessary. 
Emerging problems should serve as a basis for training and educating doctors to work with AI, expanding the scope of application and developing new diagnostic techniques.\",\"PeriodicalId\":34831,\"journal\":{\"name\":\"Digital Diagnostics\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Digital Diagnostics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.17816/dd430348\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Digital Diagnostics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.17816/dd430348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Artificial intelligence (AI) systems are highly efficient; however, their implementation in medical practice is accompanied by a range of ethical issues. The black box problem is fundamental to the philosophy of AI, although it takes a specific form in medicine. To study the problems of implementing AI in medicine, relevant papers from the last three years were selected by citation count and analyzed using the PubMed and Google Scholar search engines. One of the central problems is that the algorithms used to justify decisions remain unclear to doctors and patients. This lack of clear and comprehensible principles of AI operation is called the black box problem. How can doctors rely on AI findings without enough data to explain a particular decision? Who will be responsible for the final decision in case of an adverse outcome (death or serious injury)? In routine practice, medical decisions are based on an integrative approach (understanding of pathophysiology and biochemistry and interpretation of past findings), clinical trials, and cohort studies. AI may be used to build a plan for disease diagnosis and treatment while not providing a convincing justification for specific decisions. This creates a black box: the information that the AI considers important for reaching a conclusion is not always clear, nor is it clear how or why the AI reaches that conclusion. Thus, Juan M. Durán writes: "Even if we claim to understand the principles underlying AI annotation and training, it is still difficult and often even impossible to understand the inner workings of such systems." The doctor can interpret or verify the results of these algorithms but cannot explain how an algorithm arrived at its recommendation or diagnosis. Currently, AI models are trained to recognize microscopic adenomas and polyps in the colon. However, despite the high accuracy, doctors still have an insufficient understanding of how AI differentiates between different types of polyps, and the signs that are key to an AI diagnosis remain unclear even to experienced endoscopists. Another example is the biomarkers of colorectal cancer recognized by AI. The doctor does not know how the algorithms determine the quantitative and qualitative criteria for the detected biomarkers when formulating a final diagnosis in each individual case; that is, a black box emerges in the pathology process. To earn the trust of doctors and patients, the processes underlying the operation of AI must be deciphered and explained, describing step by step how a specific result is reached. Although black box algorithms cannot be called transparent, the possibility of applying these technologies in practical medicine is worth considering. Despite the problems described above, the accuracy and efficiency of AI solutions are too valuable to neglect; on the contrary, their use is necessary. Emerging problems should serve as a basis for training and educating doctors to work with AI, expanding the scope of its application, and developing new diagnostic techniques.
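
As an illustrative aside (not part of the original abstract), one common family of techniques for partially "opening" such black boxes is post-hoc feature attribution. The sketch below uses permutation feature importance from scikit-learn on a synthetic, hypothetical "biomarker" dataset; the data, feature names, and model are assumptions chosen only to demonstrate the idea of tracing which inputs a model relies on, not the method of the paper or of any clinical system.

```python
# Minimal sketch: permutation feature importance as a first, coarse look inside
# a black-box classifier. All data and names here are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical dataset: each row is a patient, each column a measured biomarker.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
feature_names = [f"biomarker_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque model; a random forest stands in for any black-box classifier.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time on held-out data and measure the accuracy drop.
# A large drop means the model relies on that feature, giving a clinician a
# rough, global view of what drives the predictions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
ranked = sorted(zip(feature_names,
                    result.importances_mean,
                    result.importances_std),
                key=lambda t: t[1], reverse=True)
for name, mean, std in ranked:
    print(f"{name}: {mean:.3f} +/- {std:.3f}")
```

Such attributions are only a partial remedy: they indicate which inputs matter globally, but they do not by themselves explain, step by step, how a specific conclusion was reached for an individual patient, which is the core of the black box concern described above.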