Cybersecurity Threats and Mitigation Strategies for Large Language Models in Health Care.

IF 8.1 · Q1 · Computer Science, Artificial Intelligence
Tugba Akinci D'Antonoli, Ali S Tejani, Bardia Khosravi, Christian Bluethgen, Felix Busch, Keno K Bressem, Lisa Christine Adams, Mana Moassefi, Shahriar Faghani, Judy Wawira Gichoya
{"title":"Cybersecurity Threats and Mitigation Strategies for Large Language Models in Health Care.","authors":"Tugba Akinci D'Antonoli, Ali S Tejani, Bardia Khosravi, Christian Bluethgen, Felix Busch, Keno K Bressem, Lisa Christine Adams, Mana Moassefi, Shahriar Faghani, Judy Wawira Gichoya","doi":"10.1148/ryai.240739","DOIUrl":null,"url":null,"abstract":"<p><p><i>\"Just Accepted\" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> The integration of large language models (LLMs) into health care offers tremendous opportunities to improve medical practice and patient care. Besides being susceptible to biases and threats common to all artificial intelligence systems, LLMs pose unique cybersecurity risks that must be carefully evaluated before these AI models are deployed in health care. LLMs can be exploited in several ways, such as malicious attacks, privacy breaches, and unauthorized manipulation of patient data. Moreover, malicious actors could use LLMs to infer sensitive patient information from training data. Furthermore, manipulated or poisoned data fed into these models could change their results in a way that is beneficial for the malicious actors. This report presents the cybersecurity challenges posed by LLMs in health care and provides strategies for mitigation. By implementing robust security measures and adhering to best practices during the model development, training, and deployment stages, stakeholders can help minimize these risks and protect patient privacy. ©RSNA, 2025.</p>","PeriodicalId":29787,"journal":{"name":"Radiology-Artificial Intelligence","volume":" ","pages":"e240739"},"PeriodicalIF":8.1000,"publicationDate":"2025-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Radiology-Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1148/ryai.240739","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. The integration of large language models (LLMs) into health care offers tremendous opportunities to improve medical practice and patient care. Besides being susceptible to biases and threats common to all artificial intelligence systems, LLMs pose unique cybersecurity risks that must be carefully evaluated before these AI models are deployed in health care. LLMs can be exploited in several ways, such as malicious attacks, privacy breaches, and unauthorized manipulation of patient data. Moreover, malicious actors could use LLMs to infer sensitive patient information from training data. Furthermore, manipulated or poisoned data fed into these models could change their results in a way that is beneficial for the malicious actors. This report presents the cybersecurity challenges posed by LLMs in health care and provides strategies for mitigation. By implementing robust security measures and adhering to best practices during the model development, training, and deployment stages, stakeholders can help minimize these risks and protect patient privacy. ©RSNA, 2025.

Source Journal: Radiology: Artificial Intelligence
CiteScore: 16.20
Self-citation rate: 1.00%
Journal Introduction: Radiology: Artificial Intelligence is a bimonthly online publication that focuses on emerging applications of machine learning and artificial intelligence in imaging across various disciplines. The journal accepts multiple manuscript types, including Original Research, Technical Developments, Data Resources, Review articles, Editorials, Letters to the Editor and Replies, Special Reports, and AI in Brief.