Defending explicability as a principle for the ethics of artificial intelligence in medicine.

IF 2.3 · Region 2 (Philosophy) · Q1 (Ethics)
Medicine Health Care and Philosophy · Pub Date: 2023-12-01 · Epub Date: 2023-08-29 · DOI: 10.1007/s11019-023-10175-7
Jonathan Adams
Pages: 615-623
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10725847/pdf/
Citations: 0

Abstract


The difficulty of explaining the outputs of artificial intelligence (AI) models and what has led to them is a notorious ethical problem wherever these technologies are applied, including in the medical domain, and one that has no obvious solution. This paper examines the proposal, made by Luciano Floridi and colleagues, to include a new 'principle of explicability' alongside the traditional four principles of bioethics that make up the theory of 'principlism'. It specifically responds to a recent set of criticisms that challenge the supposed need for such a principle to perform an enabling role in relation to the traditional four principles and therefore suggest that these four are sufficient without the addition of explicability. The paper challenges the critics' premise that explicability cannot be an ethical principle like the classic four because it is explicitly subordinate to them. It argues instead that principlism in its original formulation locates the justification for ethical principles in a midlevel position such that they mediate between the most general moral norms and the contextual requirements of medicine. This conception of an ethical principle then provides a mold for an approach to explicability on which it functions as an enabling principle that unifies technical/epistemic demands on AI and the requirements of high-level ethical theories. The paper finishes by anticipating an objection that decision-making by clinicians and AI fall equally, but implausibly, under the principle of explicability's scope, which it rejects on the grounds that human decisions, unlike AI's, can be explained by their social environments.

Source journal
CiteScore: 4.30
Self-citation rate: 4.80%
Articles published: 64
Journal introduction: Medicine, Health Care and Philosophy: A European Journal is the official journal of the European Society for Philosophy of Medicine and Health Care. It provides a forum for international exchange of research data, theories, reports and opinions in bioethics and philosophy of medicine. The journal promotes interdisciplinary studies and stimulates philosophical analysis centered on a common object of reflection: health care, the human effort to deal with disease, illness and death, as well as health, well-being and life. Particular attention is paid to developing contributions from all European countries, and to making accessible scientific work and reports on the practice of health care ethics from all nations, cultures and language areas in Europe.