Privacy in Practice: Private COVID-19 Detection in X-Ray Images

Lucas Lange, Maja Schneider, E. Rahm
{"title":"实践中的隐私:x射线图像中的私人COVID-19检测","authors":"Lucas Lange, Maja Schneider, E. Rahm","doi":"10.5220/0012048100003555","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) can help fight the COVID-19 pandemic by enabling rapid screening of large volumes of chest X-ray images. To perform such data analysis while maintaining patient privacy, we create ML models that satisfy Differential Privacy (DP). Previous works exploring private COVID-19 ML models are in part based on small or skewed datasets, are lacking in their privacy guarantees, and do not investigate practical privacy. In this work, we therefore suggest several improvements to address these open gaps. We account for inherent class imbalances in the data and evaluate the utility-privacy trade-off more extensively and over stricter privacy budgets than in previous work. Our evaluation is supported by empirically estimating practical privacy leakage through actual attacks. Based on theory, the introduced DP should help limit and mitigate information leakage threats posed by black-box Membership Inference Attacks (MIAs). Our practical privacy analysis is the first to test this hypothesis on the COVID-19 detection task. In addition, we also re-examine the evaluation on the MNIST database. Our results indicate that based on the task-dependent threat from MIAs, DP does not always improve practical privacy, which we show on the COVID-19 task. The results further suggest that with increasing DP guarantees, empirical privacy leakage reaches an early plateau and DP therefore appears to have a limited impact on MIA defense. Our findings identify possibilities for better utility-privacy trade-offs, and we thus believe that empirical attack-specific privacy estimation can play a vital role in tuning for practical privacy.","PeriodicalId":74779,"journal":{"name":"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. International Conference on Security and Cryptography","volume":"1 1","pages":"624-633"},"PeriodicalIF":0.0000,"publicationDate":"2022-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Privacy in Practice: Private COVID-19 Detection in X-Ray Images\",\"authors\":\"Lucas Lange, Maja Schneider, E. Rahm\",\"doi\":\"10.5220/0012048100003555\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning (ML) can help fight the COVID-19 pandemic by enabling rapid screening of large volumes of chest X-ray images. To perform such data analysis while maintaining patient privacy, we create ML models that satisfy Differential Privacy (DP). Previous works exploring private COVID-19 ML models are in part based on small or skewed datasets, are lacking in their privacy guarantees, and do not investigate practical privacy. In this work, we therefore suggest several improvements to address these open gaps. We account for inherent class imbalances in the data and evaluate the utility-privacy trade-off more extensively and over stricter privacy budgets than in previous work. Our evaluation is supported by empirically estimating practical privacy leakage through actual attacks. Based on theory, the introduced DP should help limit and mitigate information leakage threats posed by black-box Membership Inference Attacks (MIAs). Our practical privacy analysis is the first to test this hypothesis on the COVID-19 detection task. In addition, we also re-examine the evaluation on the MNIST database. 
Our results indicate that based on the task-dependent threat from MIAs, DP does not always improve practical privacy, which we show on the COVID-19 task. The results further suggest that with increasing DP guarantees, empirical privacy leakage reaches an early plateau and DP therefore appears to have a limited impact on MIA defense. Our findings identify possibilities for better utility-privacy trade-offs, and we thus believe that empirical attack-specific privacy estimation can play a vital role in tuning for practical privacy.\",\"PeriodicalId\":74779,\"journal\":{\"name\":\"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. International Conference on Security and Cryptography\",\"volume\":\"1 1\",\"pages\":\"624-633\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-11-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. International Conference on Security and Cryptography\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.5220/0012048100003555\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SECRYPT ... : proceedings of the International Conference on Security and Cryptography. International Conference on Security and Cryptography","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.5220/0012048100003555","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Machine learning (ML) can help fight the COVID-19 pandemic by enabling rapid screening of large volumes of chest X-ray images. To perform such data analysis while maintaining patient privacy, we create ML models that satisfy Differential Privacy (DP). Previous works exploring private COVID-19 ML models are partly based on small or skewed datasets, fall short in their privacy guarantees, and do not investigate practical privacy. In this work, we therefore suggest several improvements to address these open gaps. We account for inherent class imbalances in the data and evaluate the utility-privacy trade-off more extensively and over stricter privacy budgets than previous work. Our evaluation is supported by empirically estimating practical privacy leakage through actual attacks. In theory, the introduced DP should help limit and mitigate the information-leakage threat posed by black-box Membership Inference Attacks (MIAs). Our practical privacy analysis is the first to test this hypothesis on the COVID-19 detection task. In addition, we re-examine the evaluation on the MNIST database. Our results indicate that, given the task-dependent threat from MIAs, DP does not always improve practical privacy, which we show on the COVID-19 task. The results further suggest that with increasing DP guarantees, empirical privacy leakage reaches an early plateau, so DP appears to have a limited impact on MIA defense. Our findings identify possibilities for better utility-privacy trade-offs, and we thus believe that empirical, attack-specific privacy estimation can play a vital role in tuning for practical privacy.
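To make the privacy mechanism concrete: a randomized training algorithm M satisfies (ε, δ)-Differential Privacy if, for any two datasets D and D′ differing in a single record and any set of outcomes S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ. In deep learning this guarantee is commonly obtained with DP-SGD, which clips per-sample gradients and adds calibrated Gaussian noise. Below is a minimal sketch of such training, assuming PyTorch with the Opacus library; the tiny CNN, the dummy stand-in for the X-ray data, the class weights, and all hyperparameters are illustrative placeholders, not the authors' actual pipeline.

```python
# A minimal DP-SGD training sketch using Opacus for PyTorch.
# The data, model, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

# Dummy stand-in for a chest X-ray dataset: 256 grayscale 64x64 images
# with binary labels (COVID-19 positive / negative).
images = torch.randn(256, 1, 64, 64)
labels = torch.randint(0, 2, (256,))
loader = DataLoader(TensorDataset(images, labels), batch_size=32)

model = nn.Sequential(  # tiny CNN placeholder
    nn.Conv2d(1, 8, kernel_size=3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# For imbalanced data, a class-weighted loss is one common remedy;
# these weights are placeholders, not necessarily the paper's method.
criterion = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 3.0]))

# Wrap model/optimizer/loader so that each optimization step clips
# per-sample gradients and adds Gaussian noise (DP-SGD).
privacy_engine = PrivacyEngine()
model, optimizer, loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=loader,
    noise_multiplier=1.1,  # noise scale: higher => stronger privacy
    max_grad_norm=1.0,     # per-sample gradient clipping bound
)

for epoch in range(3):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()

# Privacy budget spent so far, for a chosen delta.
print("epsilon =", privacy_engine.get_epsilon(delta=1e-5))
```

The "actual attacks" used for the empirical estimates are black-box MIAs, which try to decide whether a given sample was part of the training set. A simple illustrative variant (the paper's concrete attack setup may differ) thresholds the model's prediction confidence, since training-set members tend to receive more confident predictions; the sketch below reuses `model` and `images` from above, and uses random noise as a crude stand-in for non-member data.

```python
# A black-box, confidence-thresholding membership inference attack
# (in the spirit of Yeom et al., 2018). Reuses `model` and `images`
# from the sketch above; real evaluations use held-out samples as
# non-members rather than random noise.
import torch

model.eval()

@torch.no_grad()
def max_confidence(model, x):
    # Highest softmax probability per sample.
    return torch.softmax(model(x), dim=1).max(dim=1).values

member_conf = max_confidence(model, images[:64])  # seen in training
nonmember_conf = max_confidence(model, torch.randn(64, 1, 64, 64))

threshold = 0.9  # attacker-chosen; tuned on shadow data in practice
tpr = (member_conf > threshold).float().mean()     # true-positive rate
fpr = (nonmember_conf > threshold).float().mean()  # false-positive rate
print("MIA advantage ~", (tpr - fpr).item())  # > 0 indicates leakage
```

The gap between the attack's true-positive and false-positive rates is one common proxy for empirical leakage; the abstract's observation that such leakage plateaus as the DP budget tightens is what motivates attack-specific tuning of the utility-privacy trade-off.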