Artificial Intelligence in Healthcare: The Explainability Ethical Paradox

Patrick J Seitzinger, J. Kalra
{"title":"医疗保健中的人工智能:可解释性伦理悖论","authors":"Patrick J Seitzinger, J. Kalra","doi":"10.54941/ahfe1003466","DOIUrl":null,"url":null,"abstract":"Explainability is among the most debated and pivotal discussions in the advancement of Artificial Intelligence (AI) technologies across the globe. The development of AI in medicine has reached a tipping point in medicine with implications across all sectors. How we proceed with the issue of explainability will shape the direction and manner in which healthcare evolves. We require new tools that brings us beyond our current levels of medical understanding and capabilities. However, we limit ourselves to tools that we can fully understand and explain. Implementing a tool that cannot be fully understandable by clinicians or patients violates medical ethics of informed consent. Yet, denying patients and the population attainable benefits of a new resource violates medical ethics of justice, health equity and autonomy. Fear of the unknown is not by itself a reason to halt the progression of medicine. Many of our current advancements were implemented prior to fully understanding its intricacies. To convey competence, some subfields of AI research have emphasized validity testing over explainability as a way to verify accuracy and build trust in AI systems. As a tool AI has shown immense potential in idea generation, data analysis, and pattern identification. AI will never be an independent system and will always require human oversight to ensure healthcare quality and ethical implementation. By using AI to augment, rather than replace clinical judgement, the caliber of patient care that we provide can be enhanced in a safe and sustainable manner. Addressing the explainability paradox in AI requires a multidisciplinary approach to address technical, legal, medical, and ethical aspects of this challenge.","PeriodicalId":389399,"journal":{"name":"Healthcare and Medical Devices","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Artificial Intelligence in Healthcare: The Explainability Ethical Paradox\",\"authors\":\"Patrick J Seitzinger, J. Kalra\",\"doi\":\"10.54941/ahfe1003466\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Explainability is among the most debated and pivotal discussions in the advancement of Artificial Intelligence (AI) technologies across the globe. The development of AI in medicine has reached a tipping point in medicine with implications across all sectors. How we proceed with the issue of explainability will shape the direction and manner in which healthcare evolves. We require new tools that brings us beyond our current levels of medical understanding and capabilities. However, we limit ourselves to tools that we can fully understand and explain. Implementing a tool that cannot be fully understandable by clinicians or patients violates medical ethics of informed consent. Yet, denying patients and the population attainable benefits of a new resource violates medical ethics of justice, health equity and autonomy. Fear of the unknown is not by itself a reason to halt the progression of medicine. Many of our current advancements were implemented prior to fully understanding its intricacies. To convey competence, some subfields of AI research have emphasized validity testing over explainability as a way to verify accuracy and build trust in AI systems. 
As a tool AI has shown immense potential in idea generation, data analysis, and pattern identification. AI will never be an independent system and will always require human oversight to ensure healthcare quality and ethical implementation. By using AI to augment, rather than replace clinical judgement, the caliber of patient care that we provide can be enhanced in a safe and sustainable manner. Addressing the explainability paradox in AI requires a multidisciplinary approach to address technical, legal, medical, and ethical aspects of this challenge.\",\"PeriodicalId\":389399,\"journal\":{\"name\":\"Healthcare and Medical Devices\",\"volume\":\"34 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1900-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Healthcare and Medical Devices\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.54941/ahfe1003466\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Healthcare and Medical Devices","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.54941/ahfe1003466","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Explainability is among the most debated and pivotal discussions in the advancement of Artificial Intelligence (AI) technologies across the globe. The development of AI in medicine has reached a tipping point, with implications across all sectors. How we proceed with the issue of explainability will shape the direction and manner in which healthcare evolves. We require new tools that bring us beyond our current levels of medical understanding and capability. However, we limit ourselves to tools that we can fully understand and explain. Implementing a tool that cannot be fully understood by clinicians or patients violates the medical ethic of informed consent. Yet denying patients and the population the attainable benefits of a new resource violates the medical ethics of justice, health equity, and autonomy. Fear of the unknown is not by itself a reason to halt the progression of medicine: many of our current advancements were implemented before their intricacies were fully understood. To convey competence, some subfields of AI research have emphasized validity testing over explainability as a way to verify accuracy and build trust in AI systems. As a tool, AI has shown immense potential in idea generation, data analysis, and pattern identification. AI will never be an independent system and will always require human oversight to ensure healthcare quality and ethical implementation. By using AI to augment, rather than replace, clinical judgement, we can enhance the caliber of patient care in a safe and sustainable manner. Addressing the explainability paradox in AI requires a multidisciplinary approach spanning the technical, legal, medical, and ethical aspects of this challenge.
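To make the notion of "validity testing over explainability" concrete, the following is a minimal sketch, not drawn from the paper itself, of how accuracy-oriented validation of an opaque model might look in practice. The dataset, the choice of a random-forest classifier, and the scikit-learn usage are all illustrative assumptions.

```python
# Minimal sketch: validity testing of an opaque ("black-box") clinical model.
# Assumptions: scikit-learn is available, and a synthetic dataset stands in
# for de-identified patient records.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for patient features (X) and diagnostic labels (y).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# An ensemble model whose internal decision logic is not easily explained
# to a clinician or patient.
model = RandomForestClassifier(n_estimators=200, random_state=0)

# Validity testing: estimate out-of-sample discrimination with 5-fold
# cross-validation rather than attempting to explain each prediction.
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Cross-validated AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The design point is that the evidence offered to users is an empirical performance estimate rather than a mechanistic account of the model's reasoning; whether that substitution satisfies informed consent is precisely the paradox the paper raises.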