Walking Backward to Ensure Risk Management of Large Language Models in Medicine

IF 1.7 | Zone 4 (Philosophy) | Q2 (ETHICS)
Daria Onitiu, Sandra Wachter, Brent Mittelstadt
{"title":"医学大语言模型风险管理的逆向探索","authors":"Daria Onitiu, Sandra Wachter, Brent Mittelstadt","doi":"10.1017/jme.2025.10132","DOIUrl":null,"url":null,"abstract":"<p><p>This paper examines in what way providers of specialized Large Language Models (LLM) pre-trained and/or fine-tuned on medical data, conduct risk management, define, estimate, mitigate and monitor safety risks under the EU Medical Device Regulation (MDR). Using the example of an Artificial Intelligence (AI)-based medical device for lung cancer detection, we review the current risk management process in the MDR entailing a \"forward-walking\" approach for providers articulating the medical device's clear intended use, and moving on sequentially along the definition, mitigation, and monitoring of risks. We note that the forward-walking approach clashes with the MDR requirement for articulating an intended use, as well as circumvents providers reasoning around the risks of specialised LLMs. The forward-walking approach inadvertently introduces different intended users, new hazards for risk control and use cases, producing unclear and incomplete risk management for the safety of LLMs. Our contribution is that the MDR risk management framework requires a backward-walking logic. This concept, similar to the notion of \"backward-reasoning\" in computer science, entails sub-goals for providers to examine a system's intended user(s), risks of new hazards and different use cases and then reason around the task-specific options, inherent risks at scale and trade-offs for risk management.</p>","PeriodicalId":50165,"journal":{"name":"Journal of Law Medicine & Ethics","volume":" ","pages":"454-464"},"PeriodicalIF":1.7000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Walking Backward to Ensure Risk Management of Large Language Models in Medicine.\",\"authors\":\"Daria Onitiu, Sandra Wachter, Brent Mittelstadt\",\"doi\":\"10.1017/jme.2025.10132\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>This paper examines in what way providers of specialized Large Language Models (LLM) pre-trained and/or fine-tuned on medical data, conduct risk management, define, estimate, mitigate and monitor safety risks under the EU Medical Device Regulation (MDR). Using the example of an Artificial Intelligence (AI)-based medical device for lung cancer detection, we review the current risk management process in the MDR entailing a \\\"forward-walking\\\" approach for providers articulating the medical device's clear intended use, and moving on sequentially along the definition, mitigation, and monitoring of risks. We note that the forward-walking approach clashes with the MDR requirement for articulating an intended use, as well as circumvents providers reasoning around the risks of specialised LLMs. The forward-walking approach inadvertently introduces different intended users, new hazards for risk control and use cases, producing unclear and incomplete risk management for the safety of LLMs. Our contribution is that the MDR risk management framework requires a backward-walking logic. 
This concept, similar to the notion of \\\"backward-reasoning\\\" in computer science, entails sub-goals for providers to examine a system's intended user(s), risks of new hazards and different use cases and then reason around the task-specific options, inherent risks at scale and trade-offs for risk management.</p>\",\"PeriodicalId\":50165,\"journal\":{\"name\":\"Journal of Law Medicine & Ethics\",\"volume\":\" \",\"pages\":\"454-464\"},\"PeriodicalIF\":1.7000,\"publicationDate\":\"2025-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Law Medicine & Ethics\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.1017/jme.2025.10132\",\"RegionNum\":4,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ETHICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Law Medicine & Ethics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1017/jme.2025.10132","RegionNum":4,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0

Abstract


This paper examines how providers of specialized Large Language Models (LLMs) pre-trained and/or fine-tuned on medical data conduct risk management, that is, define, estimate, mitigate, and monitor safety risks under the EU Medical Device Regulation (MDR). Using the example of an Artificial Intelligence (AI)-based medical device for lung cancer detection, we review the current risk management process in the MDR, which entails a "forward-walking" approach: providers articulate the medical device's clear intended use and then move sequentially through the definition, mitigation, and monitoring of risks. We note that the forward-walking approach clashes with the MDR requirement to articulate an intended use and circumvents providers' reasoning around the risks of specialized LLMs. The forward-walking approach inadvertently introduces different intended users, new hazards for risk control, and new use cases, producing unclear and incomplete risk management for the safety of LLMs. Our contribution is to show that the MDR risk management framework requires a backward-walking logic. This concept, similar to the notion of "backward reasoning" in computer science, entails sub-goals for providers: examine a system's intended user(s), the risks of new hazards, and different use cases, and then reason around the task-specific options, inherent risks at scale, and trade-offs for risk management.
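
For readers unfamiliar with the computer-science analogy the abstract invokes, the sketch below illustrates backward chaining: a top-level goal is decomposed into sub-goals, and work proceeds by resolving the unproven sub-goals before reasoning forward about concrete options. It is purely illustrative; the sub-goal names, data structures, and checklist framing are assumptions made for this example, not the authors' framework and not an MDR-mandated procedure.

```python
# Illustrative sketch only: a toy "backward-walking" checklist in the spirit
# of backward chaining (goal -> sub-goals). The specific sub-goals below are
# hypothetical examples, not the paper's method or a regulatory requirement.
from dataclasses import dataclass, field


@dataclass
class SubGoal:
    name: str
    satisfied: bool = False


@dataclass
class RiskGoal:
    """Top-level goal: the LLM-based device is acceptably safe for its use."""
    name: str
    sub_goals: list[SubGoal] = field(default_factory=list)

    def open_sub_goals(self) -> list[SubGoal]:
        # Backward chaining: to establish the goal, first identify which
        # sub-goals are still unproven and resolve those before anything else.
        return [g for g in self.sub_goals if not g.satisfied]


goal = RiskGoal(
    name="Acceptable safety of an LLM-based lung cancer detection aid",
    sub_goals=[
        SubGoal("Intended user(s) examined (e.g., clinician vs. other users)"),
        SubGoal("New hazards beyond the stated intended use assessed"),
        SubGoal("Different or off-label use cases scoped"),
    ],
)

# Only once these sub-goals are resolved would one reason "forward" about
# task-specific options, inherent risks at scale, and mitigation trade-offs.
for sub_goal in goal.open_sub_goals():
    print("Unresolved sub-goal:", sub_goal.name)
```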

Source journal
Journal of Law, Medicine & Ethics
Category: Medicine (Medicine, Legal)
CiteScore: 2.90
Self-citation rate: 4.80%
Articles published: 70
Review time: 6-12 weeks
Journal description: Material published in The Journal of Law, Medicine & Ethics (JLME) contributes to the educational mission of The American Society of Law, Medicine & Ethics, covering public health, health disparities, patient safety and quality of care, and biomedical science and research. It provides articles on such timely topics as health care quality and access, managed care, pain relief, genetics, child/maternal health, reproductive health, informed consent, assisted dying, ethics committees, HIV/AIDS, and public health. Symposium issues review significant policy developments, health law court decisions, and books.