Security and Privacy in Machine Learning for Health Systems: Strategies and Challenges.

Yearbook of Medical Informatics, 32(1): 269-281. Published: 2023-08-01; Epub: 2023-12-26. DOI: 10.1055/s-0043-1768731. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10751106/pdf/
Erikson J de Aguiar, Caetano Traina, Agma J M Traina
{"title":"Security and Privacy in Machine Learning for Health Systems: Strategies and Challenges.","authors":"Erikson J de Aguiar, Caetano Traina, Agma J M Traina","doi":"10.1055/s-0043-1768731","DOIUrl":null,"url":null,"abstract":"<p><strong>Objectives: </strong>Machine learning (ML) is a powerful asset to support physicians in decision-making procedures, providing timely answers. However, ML for health systems can suffer from security attacks and privacy violations. This paper investigates studies of security and privacy in ML for health.</p><p><strong>Methods: </strong>We examine attacks, defenses, and privacy-preserving strategies, discussing their challenges. We conducted the following research protocol: starting a manual search, defining the search string, removing duplicated papers, filtering papers by title and abstract, then their full texts, and analyzing their contributions, including strategies and challenges. Finally, we collected and discussed 40 papers on attacks, defense, and privacy.</p><p><strong>Results: </strong>Our findings identified the most employed strategies for each domain. We found trends in attacks, including universal adversarial perturbation (UAPs), generative adversarial network (GAN)-based attacks, and DeepFakes to generate malicious examples. Trends in defense are adversarial training, GAN-based strategies, and out-of-distribution (OOD) to identify and mitigate adversarial examples (AE). We found privacy-preserving strategies such as federated learning (FL), differential privacy, and combinations of strategies to enhance the FL. Challenges in privacy comprehend the development of attacks that bypass fine-tuning, defenses to calibrate models to improve their robustness, and privacy methods to enhance the FL strategy.</p><p><strong>Conclusions: </strong>In conclusion, it is critical to explore security and privacy in ML for health, because it has grown risks and open vulnerabilities. Our study presents strategies and challenges to guide research to investigate issues about security and privacy in ML applied to health systems.</p>","PeriodicalId":40027,"journal":{"name":"Yearbook of medical informatics","volume":"32 1","pages":"269-281"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10751106/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Yearbook of medical informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1055/s-0043-1768731","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/12/26 0:00:00","PubModel":"Epub","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Objectives: Machine learning (ML) is a powerful asset for supporting physicians in decision-making, providing timely answers. However, ML for health systems can suffer from security attacks and privacy violations. This paper investigates studies of security and privacy in ML for health.

Methods: We examine attacks, defenses, and privacy-preserving strategies, and discuss their challenges. Our research protocol was as follows: conduct a manual search, define the search string, remove duplicate papers, filter papers by title and abstract and then by full text, and analyze their contributions, including strategies and challenges. In total, we collected and discussed 40 papers on attacks, defenses, and privacy.
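The protocol's deduplication and screening steps can be illustrated with a short, hypothetical Python sketch; the record fields, DOIs, and keyword list below are illustrative assumptions, not the paper's actual search string or corpus.

```python
# Hypothetical sketch of the review protocol's dedup/screening steps.
# Records, DOIs, and keywords are illustrative, not the paper's corpus.
records = [
    {"doi": "10.1000/a", "title": "Adversarial Attacks on Medical Imaging Models"},
    {"doi": "10.1000/a", "title": "Adversarial Attacks on Medical Imaging Models"},
    {"doi": "10.1000/b", "title": "Federated Learning for Hospital Data Sharing"},
]

# Step 1: remove duplicate papers (here, by DOI).
seen, unique = set(), []
for rec in records:
    if rec["doi"] not in seen:
        seen.add(rec["doi"])
        unique.append(rec)

# Step 2: filter by title against an assumed search string's keywords.
keywords = ("adversarial", "privacy", "federated", "security")
shortlist = [r for r in unique if any(k in r["title"].lower() for k in keywords)]
print(f"{len(shortlist)} papers kept for full-text screening")
```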

Results: Our findings identified the most widely employed strategies in each domain. Trends in attacks include universal adversarial perturbations (UAPs), generative adversarial network (GAN)-based attacks, and DeepFakes used to generate malicious examples. Trends in defense include adversarial training, GAN-based strategies, and out-of-distribution (OOD) detection to identify and mitigate adversarial examples (AEs). Privacy-preserving strategies include federated learning (FL), differential privacy, and combinations of strategies that enhance FL. Remaining challenges include attacks that bypass fine-tuning, defenses that calibrate models to improve their robustness, and privacy methods that further strengthen FL.
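To make the attack-and-defense trend concrete, here is a minimal, hypothetical PyTorch sketch: a single FGSM-style gradient-sign perturbation (a simple stand-in for the perturbation attacks the survey covers, such as UAPs) followed by one adversarial-training step. The toy model, data, and epsilon are illustrative assumptions, not from the paper.

```python
# Illustrative sketch only: one FGSM-style perturbation plus one
# adversarial-training step on a toy classifier. FGSM stands in for
# the perturbation attacks discussed in the survey (e.g., UAPs).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

x = torch.randn(8, 16)         # toy batch standing in for patient features
y = torch.randint(0, 2, (8,))  # toy binary labels

# Attack: perturb inputs in the direction of the loss gradient's sign.
x.requires_grad_(True)
loss = loss_fn(model(x), y)
loss.backward()
epsilon = 0.05                 # hypothetical perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()

# Defense: adversarial training mixes clean and adversarial examples.
optimizer.zero_grad()
mixed_loss = 0.5 * loss_fn(model(x.detach()), y) + 0.5 * loss_fn(model(x_adv), y)
mixed_loss.backward()
optimizer.step()
```

Similarly, the privacy trend of combining differential privacy with FL can be illustrated by a one-round federated-averaging sketch that clips and noises client updates before aggregation. The clipping norm and noise scale below are hypothetical and not calibrated to a formal (epsilon, delta) guarantee.

```python
# Minimal sketch, under assumed conditions: FedAvg with Gaussian noise
# added to clipped client updates, approximating the "DP + FL" combination.
import torch

def dp_fedavg(client_updates, clip_norm=1.0, noise_std=0.1):
    """Average clipped, noised client weight updates (one round)."""
    noised = []
    for update in client_updates:
        norm = torch.linalg.vector_norm(update)
        scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)
        noised.append(update * scale + noise_std * torch.randn_like(update))
    return torch.stack(noised).mean(dim=0)

# Example: three hypothetical clients each send a flattened update vector.
updates = [torch.randn(10) for _ in range(3)]
global_delta = dp_fedavg(updates)
```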

Conclusions: It is critical to explore security and privacy in ML for health, because the field faces growing risks and open vulnerabilities. Our study presents strategies and challenges to guide research into security and privacy issues in ML applied to health systems.

Source Journal
Yearbook of Medical Informatics (Medicine, all)
CiteScore: 4.10
Self-citation rate: 0.00%
Annual articles: 20
Journal description: Published by the International Medical Informatics Association, this annual publication includes the best papers in medical informatics from around the world.