{"title":"Walking Backward to Ensure Risk Management of Large Language Models in Medicine.","authors":"Daria Onitiu, Sandra Wachter, Brent Mittelstadt","doi":"10.1017/jme.2025.10132","DOIUrl":null,"url":null,"abstract":"<p><p>This paper examines in what way providers of specialized Large Language Models (LLM) pre-trained and/or fine-tuned on medical data, conduct risk management, define, estimate, mitigate and monitor safety risks under the EU Medical Device Regulation (MDR). Using the example of an Artificial Intelligence (AI)-based medical device for lung cancer detection, we review the current risk management process in the MDR entailing a \"forward-walking\" approach for providers articulating the medical device's clear intended use, and moving on sequentially along the definition, mitigation, and monitoring of risks. We note that the forward-walking approach clashes with the MDR requirement for articulating an intended use, as well as circumvents providers reasoning around the risks of specialised LLMs. The forward-walking approach inadvertently introduces different intended users, new hazards for risk control and use cases, producing unclear and incomplete risk management for the safety of LLMs. Our contribution is that the MDR risk management framework requires a backward-walking logic. 
This concept, similar to the notion of \"backward-reasoning\" in computer science, entails sub-goals for providers to examine a system's intended user(s), risks of new hazards and different use cases and then reason around the task-specific options, inherent risks at scale and trade-offs for risk management.</p>","PeriodicalId":50165,"journal":{"name":"Journal of Law Medicine & Ethics","volume":" ","pages":"454-464"},"PeriodicalIF":1.7000,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Law Medicine & Ethics","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.1017/jme.2025.10132","RegionNum":4,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ETHICS","Score":null,"Total":0}
Citations: 0
Abstract
This paper examines how providers of specialized Large Language Models (LLMs) pre-trained and/or fine-tuned on medical data conduct risk management under the EU Medical Device Regulation (MDR): how they define, estimate, mitigate, and monitor safety risks. Using the example of an Artificial Intelligence (AI)-based medical device for lung cancer detection, we review the current MDR risk management process, which entails a "forward-walking" approach: the provider articulates the medical device's clear intended use and then moves sequentially through the definition, mitigation, and monitoring of risks. We note that the forward-walking approach clashes with the MDR requirement to articulate an intended use and sidesteps providers' reasoning about the risks of specialized LLMs. It inadvertently introduces different intended users, new hazards for risk control, and new use cases, producing unclear and incomplete risk management for the safety of LLMs. Our contribution is to show that the MDR risk management framework requires a backward-walking logic. This concept, similar to the notion of "backward reasoning" in computer science, sets sub-goals for providers: first examine a system's intended user(s), the risks of new hazards, and the different use cases, and then reason about the task-specific options, inherent risks at scale, and trade-offs for risk management.
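The "backward reasoning" the abstract invokes corresponds to backward chaining in computer science: an inference engine starts from a goal and recursively tries to prove its sub-goals, rather than walking forward from initial facts. A minimal sketch of that idea, with purely illustrative rule and fact names loosely echoing the paper's sub-goals (none of these identifiers come from the paper itself):

```python
# Minimal backward-chaining sketch: start from a goal and recursively
# prove its sub-goals. Rules and facts below are illustrative only.

RULES = {
    # goal: list of sub-goals that must all hold for the goal to hold
    "risk_managed": ["users_examined", "hazards_assessed", "use_cases_bounded"],
    "hazards_assessed": ["new_hazards_listed", "risks_at_scale_estimated"],
}

FACTS = {
    "users_examined", "use_cases_bounded",
    "new_hazards_listed", "risks_at_scale_estimated",
}

def prove(goal, rules, facts):
    """Return True if `goal` is a known fact, or if some rule derives it
    and every sub-goal of that rule can itself be proven."""
    if goal in facts:
        return True
    subs = rules.get(goal)
    if subs is None:  # no rule concludes this goal and it is not a fact
        return False
    return all(prove(sub, rules, facts) for sub in subs)

print(prove("risk_managed", RULES, FACTS))  # True: all sub-goals discharge
```

The point of the analogy is the direction of travel: the top-level goal ("risk_managed") is fixed first, and the engine works backward to the sub-goals that must be established, mirroring the paper's proposal that providers start from intended users, hazards, and use cases and only then reason about task-specific options and trade-offs.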
About the journal:
Material published in The Journal of Law, Medicine & Ethics (JLME) contributes to the educational mission of The American Society of Law, Medicine & Ethics, covering public health, health disparities, patient safety and quality of care, and biomedical science and research. It provides articles on such timely topics as health care quality and access, managed care, pain relief, genetics, child/maternal health, reproductive health, informed consent, assisted dying, ethics committees, HIV/AIDS, and public health. Symposium issues review significant policy developments, health law court decisions, and books.