Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI.

Frontiers in Bioinformatics · IF 2.8 · Q2 (Mathematical & Computational Biology)
Pub Date: 2023-07-05 · eCollection Date: 2023-01-01 · DOI: 10.3389/fbinf.2023.1194993
Adriano Lucieri, Andreas Dengel, Sheraz Ahmed
{"title":"Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI.","authors":"Adriano Lucieri, Andreas Dengel, Sheraz Ahmed","doi":"10.3389/fbinf.2023.1194993","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications, promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithm accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces that enable humans to comprehend the decisions made by data-driven models. Among these approaches, concept-based explainability stands out due to its ability to align explanations with high-level concepts familiar to users. Nonetheless, early research in adversarial machine learning has unveiled that exposing model explanations can render victim models more susceptible to attacks. This is the first study to investigate and compare the impact of concept-based explanations on the privacy of Deep Learning based AI models in the context of biomedical image analysis. An extensive privacy benchmark is conducted on three different state-of-the-art model architectures (ResNet50, NFNet, ConvNeXt) trained on two biomedical (ISIC and EyePACS) and one synthetic dataset (SCDB). The success of membership inference attacks while exposing varying degrees of attribution-based and concept-based explanations is systematically compared. The findings indicate that, in theory, concept-based explanations can potentially increase the vulnerability of a private AI system by up to 16% compared to attributions in the baseline setting. However, it is demonstrated that, in more realistic attack scenarios, the threat posed by explanations is negligible in practice. Furthermore, actionable recommendations are provided to ensure the safe deployment of concept-based XAI systems. In addition, the impact of differential privacy (DP) on the quality of concept-based explanations is explored, revealing that while negatively influencing the explanation ability, DP can have an adverse effect on the models' privacy.</p>","PeriodicalId":73066,"journal":{"name":"Frontiers in bioinformatics","volume":"3 ","pages":"1194993"},"PeriodicalIF":2.8000,"publicationDate":"2023-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10356902/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in bioinformatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3389/fbinf.2023.1194993","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2023/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"MATHEMATICAL & COMPUTATIONAL BIOLOGY","Score":null,"Total":0}
引用次数: 0

Abstract

Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications and promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithmic accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces that enable humans to comprehend the decisions made by data-driven models. Among these approaches, concept-based explainability stands out for its ability to align explanations with high-level concepts familiar to users. Nonetheless, early research in adversarial machine learning has revealed that exposing model explanations can render victim models more susceptible to attacks. This is the first study to investigate and compare the impact of concept-based explanations on the privacy of Deep Learning-based AI models in the context of biomedical image analysis. An extensive privacy benchmark is conducted on three state-of-the-art model architectures (ResNet50, NFNet, ConvNeXt) trained on two biomedical datasets (ISIC and EyePACS) and one synthetic dataset (SCDB). The success of membership inference attacks is systematically compared while exposing varying degrees of attribution-based and concept-based explanations. The findings indicate that, in theory, concept-based explanations can increase the vulnerability of a private AI system by up to 16% compared to attributions in the baseline setting. However, it is demonstrated that in more realistic attack scenarios the threat posed by explanations is negligible in practice. Furthermore, actionable recommendations are provided to ensure the safe deployment of concept-based XAI systems. In addition, the impact of differential privacy (DP) on the quality of concept-based explanations is explored, revealing that DP not only degrades explanation quality but can also adversely affect the models' privacy.
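To make the attack setting concrete, the sketch below shows how a membership inference attack can exploit a model's explanation interface: the attacker queries the victim model for a prediction and an attribution map, summarizes both into a per-sample feature vector, and trains a binary attack classifier to separate training members from non-members. This is a minimal PyTorch illustration, not the authors' benchmark code; the gradient-saliency explanation, the choice of feature statistics, and all names and hyperparameters are assumptions, and concept-based outputs (e.g., per-concept presence scores) would be appended as additional features in the same way.

```python
# Hypothetical sketch of an explanation-aware membership inference attack
# (MIA). Illustrative only: names, features, and hyperparameters are
# assumptions, not the paper's benchmark implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


def attack_features(victim: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Per-sample features an attacker can derive from one query: the
    prediction loss plus summary statistics of a gradient-based (saliency)
    attribution map exposed by the XAI interface."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(victim(x), y, reduction="none")
    (grads,) = torch.autograd.grad(loss.sum(), x)  # input attribution map
    sal = grads.abs().flatten(1)
    return torch.stack(
        [
            loss.detach(),          # members tend to have lower loss
            sal.mean(dim=1),        # average attribution magnitude
            sal.std(dim=1),         # spread of the attribution map
            sal.max(dim=1).values,  # peak attribution
        ],
        dim=1,
    )


class AttackModel(nn.Module):
    """Binary classifier separating member from non-member feature vectors."""

    def __init__(self, n_feats: int = 4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_feats, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.net(feats).squeeze(1)  # member-vs-non-member logit


if __name__ == "__main__":
    # Stand-in victim model and random data, just to show the shapes.
    victim = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 10))
    x, y = torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))
    feats = attack_features(victim, x, y)  # shape: (8, 4)
    member_logits = AttackModel()(feats)   # train with BCEWithLogitsLoss
    print(feats.shape, member_logits.shape)
```

In the more realistic scenarios the paper considers, an attacker would not have this kind of unrestricted access to gradients and labeled queries, which is one reason the practical threat turns out to be far smaller than the theoretical 16% gap suggests. Defenses such as DP-SGD (per-example gradient clipping plus calibrated noise during training) would further dampen the membership signal these features carry, at the cost to explanation quality noted in the abstract.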

