Leveraging law and ethics to promote safe and reliable AI/ML in healthcare

Katherine Drabiak
DOI: 10.3389/fnume.2022.983340
Journal: Frontiers in Nuclear Medicine (Lausanne, Switzerland)
Published: 2022-09-27 (Journal Article, eCollection)
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11440832/pdf/
Citations: 0

Abstract


Artificial intelligence and machine learning (AI/ML) is poised to disrupt the structure and delivery of healthcare, promising to optimize clinical care delivery and information management. AI/ML offers potential benefits in healthcare, such as creating novel clinical decision support tools, pattern recognition software, and predictive modeling systems. This raises questions about how AI/ML will impact the physician-patient relationship and the practice of medicine. Effective utilization of and reliance on AI/ML also requires that these technologies be safe and reliable. Potential errors could not only pose serious risks to patient safety, but also expose physicians, hospitals, and AI/ML manufacturers to liability. This review describes how the law provides a mechanism to promote the safety and reliability of AI/ML systems. On the front end, the Food and Drug Administration (FDA) intends to regulate many AI/ML products as medical devices, which corresponds to a set of regulatory requirements prior to product marketing and use. Post-development, a variety of mechanisms in the law provide guardrails for careful deployment into clinical practice that can also incentivize product improvement. This review provides an overview of potential areas of liability arising from AI/ML, including malpractice, informed consent, corporate liability, and products liability. Finally, this review summarizes strategies to minimize risk and promote safe and reliable AI/ML.

Source journal metrics: CiteScore 0.90; self-citation rate 0.00%.