SoK: Security and Privacy Risks of Medical AI

Yuanhaur Chang, Han Liu, Evin Jaff, Chenyang Lu, Ning Zhang
{"title":"SoK: Security and Privacy Risks of Medical AI","authors":"Yuanhaur Chang, Han Liu, Evin Jaff, Chenyang Lu, Ning Zhang","doi":"arxiv-2409.07415","DOIUrl":null,"url":null,"abstract":"The integration of technology and healthcare has ushered in a new era where\nsoftware systems, powered by artificial intelligence and machine learning, have\nbecome essential components of medical products and services. While these\nadvancements hold great promise for enhancing patient care and healthcare\ndelivery efficiency, they also expose sensitive medical data and system\nintegrity to potential cyberattacks. This paper explores the security and\nprivacy threats posed by AI/ML applications in healthcare. Through a thorough\nexamination of existing research across a range of medical domains, we have\nidentified significant gaps in understanding the adversarial attacks targeting\nmedical AI systems. By outlining specific adversarial threat models for medical\nsettings and identifying vulnerable application domains, we lay the groundwork\nfor future research that investigates the security and resilience of AI-driven\nmedical systems. Through our analysis of different threat models and\nfeasibility studies on adversarial attacks in different medical domains, we\nprovide compelling insights into the pressing need for cybersecurity research\nin the rapidly evolving field of AI healthcare technology.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"34 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Cryptography and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07415","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

The integration of technology and healthcare has ushered in a new era where software systems, powered by artificial intelligence and machine learning, have become essential components of medical products and services. While these advancements hold great promise for enhancing patient care and healthcare delivery efficiency, they also expose sensitive medical data and system integrity to potential cyberattacks. This paper explores the security and privacy threats posed by AI/ML applications in healthcare. Through a thorough examination of existing research across a range of medical domains, we have identified significant gaps in understanding the adversarial attacks targeting medical AI systems. By outlining specific adversarial threat models for medical settings and identifying vulnerable application domains, we lay the groundwork for future research that investigates the security and resilience of AI-driven medical systems. Through our analysis of different threat models and feasibility studies on adversarial attacks in different medical domains, we provide compelling insights into the pressing need for cybersecurity research in the rapidly evolving field of AI healthcare technology.
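The abstract refers to adversarial attacks on medical AI systems without describing a concrete mechanism. Purely as an illustration of the class of attacks being surveyed (not taken from the paper), the sketch below applies the well-known fast gradient sign method (FGSM) to a hypothetical medical image classifier; the model loader, input tensors, and epsilon value are placeholders, not artifacts of this work.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_perturb(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                 epsilon: float = 0.01) -> torch.Tensor:
    """Craft an FGSM adversarial example: nudge each pixel in the direction
    that most increases the classifier's loss, bounded by epsilon."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed image in the valid pixel range.
    return x_adv.clamp(0.0, 1.0).detach()

# Hypothetical usage against a placeholder chest X-ray classifier
# (load_xray_classifier and the tensors are assumed, not a real API):
# model = load_xray_classifier().eval()
# x_adv = fgsm_perturb(model, xray_batch, labels, epsilon=0.005)
```

A perturbation of this size is typically imperceptible to a clinician yet can flip a model's diagnosis, which is the kind of feasibility question the paper's threat-model analysis addresses.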