Eva Weicken, Mirja Mittermaier, Thomas Hoeren, Juliana Kliesch, Thomas Wiegand, Martin Witzenrath, Miriam Ballhausen, Christian Karagiannidis, Leif Erik Sander, Matthias I Gröschel
[Focus: artificial intelligence in medicine - Legal aspects of using large language models in clinical practice]
Innere Medizin (Heidelberg, Germany), 2025, pp. 436-441. Epub 2025-03-14. Journal Article.
DOI: 10.1007/s00108-025-01861-0
Citations: 0
Abstract
Background: The use of artificial intelligence (AI) and natural language processing (NLP) methods in medicine, particularly large language models (LLMs), offers opportunities to advance the healthcare system and patient care in Germany. LLMs have recently gained importance, but their practical application in hospitals and practices has so far been limited. Research and implementation are hampered by a complex legal situation. It is therefore essential to evaluate LLMs in clinical studies in Germany and to develop guidelines for users.
Objective: How can foundations for the data protection-compliant use of LLMs, particularly cloud-based LLMs, be established in the German healthcare system? The aim of this work is to present the data protection aspects of using cloud-based LLMs in clinical research and patient care in Germany and the European Union (EU); to this end, key statements of a legal opinion on this matter are considered. Where the requirements for use are governed by state law rather than federal law, the legal situation in Berlin serves as the reference.
Materials and methods: As part of a research project, a legal opinion was commissioned to clarify the data protection aspects of using cloud-based LLMs at the Charité - University Hospital Berlin, Germany. Specific questions regarding the processing of personal data were examined.
Results: The legal framework varies depending on the type of data processing and the relevant federal state (Bundesland). Data protection requirements do not apply to anonymized data. Where personal data are processed, they should be pseudonymized if possible. In the research context, patient consent is usually required to process personal data, and data processing agreements must be concluded with the providers. Recommendations originating from LLMs must always be reviewed by physicians.
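The pseudonymization recommended above can be illustrated with a minimal sketch. This is not the authors' procedure but one common technique: replacing a direct identifier with a keyed hash (HMAC) before a record leaves the hospital, so that re-identification is possible only for whoever holds the key. All names and values here are hypothetical. Note that, unlike anonymization, pseudonymized data remain personal data under the GDPR.

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed SHA-256 hash (pseudonym).

    The mapping is deterministic (the same ID always yields the same
    pseudonym, enabling record linkage) but cannot be reversed without
    the key, which must be kept inside the hospital's trust boundary.
    """
    return hmac.new(secret_key, patient_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Hypothetical key and record; in practice the key would come from a
# dedicated key-management system, never from source code.
key = b"example-secret-key"
record = {"patient_id": "PAT-12345", "note": "Dyspnea, suspected pneumonia"}
record["patient_id"] = pseudonymize(record["patient_id"], key)
```

A keyed hash is preferable to a plain hash here because patient identifiers are drawn from a small, guessable space; without the secret key, an attacker could otherwise recover IDs by brute force.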
Conclusions: The use of cloud-based LLMs is possible as long as data protection requirements are observed. The legal framework is complex and requires transparency from providers. Future developments could increase the potential of AI and particularly LLMs in everyday clinical practice; however, clear legal and ethical guidelines are necessary.