An Evolutionary Attack for Revealing Training Data of DNNs With Higher Feature Fidelity

IF 7.0 · CAS Tier 2 (Computer Science) · JCR Q1 (Computer Science, Hardware & Architecture)
Zipeng Ye, Wenjian Luo, Ruizhuo Zhang, Hongwei Zhang, Yuhui Shi, Yan Jia
DOI: 10.1109/TDSC.2023.3347225
Journal: IEEE Transactions on Dependable and Secure Computing
Publication date: 2024-07-01
Citations: 0

Abstract

Model inversion attacks aim to reveal information about the sensitive training data of AI models, which may lead to serious privacy leakage. However, existing attack methods are limited in their ability to reconstruct training data with high feature fidelity. In this article, we propose an evolutionary model inversion attack (EvoMI) and empirically demonstrate that, combined with a systematic search in the multi-degree-of-freedom latent space of the generative model, the simple use of an evolutionary algorithm can effectively improve attack performance. Concretely, we first search for latent vectors that generate images close to the attack target in a latent space with a low degree of freedom. In general, this low-freedom constraint reduces the probability of reaching a local optimum compared to existing methods that search for latent vectors directly in the high-freedom space. We then introduce a mutation operation to expand the search domain, further reducing the possibility of converging to a local optimum. Finally, we treat the searched latent vectors as initial values for post-processing and relax the constraint to further optimize them in a higher-freedom space. Our proposed method is conceptually simple and easy to implement, yet it achieves substantial improvements and significantly outperforms state-of-the-art methods.
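The two-stage search described in the abstract can be sketched in miniature. The sketch below is a hypothetical toy, not the paper's implementation: a numerical stand-in replaces the real pipeline (a GAN generator G(z) and the target model's class confidence for f(G(z))), and the names `confidence`, `lift`, and the population sizes are all illustrative assumptions. Stage 1 runs a simple evolutionary search (selection plus Gaussian mutation) over a low-degree-of-freedom subspace; stage 2 relaxes the constraint and refines the best candidate in the full space, initialized from the stage-1 result.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the attack objective: in a real attack this
# would be the target classifier's confidence on the generated image,
# f(G(z))[c]. Here, closeness to a hidden optimum plays that role.
TARGET = rng.normal(size=8)

def confidence(z):
    # Higher is better; proxy for the target-class confidence.
    return -np.linalg.norm(z - TARGET)

# Stage 1: evolutionary search in a low-degree-of-freedom space
# (only the first 4 coordinates are free; the rest are fixed at 0).
def lift(z_low):
    z = np.zeros(8)
    z[:4] = z_low
    return z

pop = rng.normal(size=(32, 4))
for _ in range(200):
    fit = np.array([confidence(lift(z)) for z in pop])
    parents = pop[np.argsort(fit)[-8:]]                      # selection
    children = np.repeat(parents, 4, axis=0)
    children += rng.normal(scale=0.1, size=children.shape)   # mutation
    pop = children
best_low = max(pop, key=lambda z: confidence(lift(z)))

# Stage 2: relax the constraint and refine in the full space,
# initialized from the stage-1 result (simple hill climbing here;
# the paper's post-processing is a gradient-based optimization).
z = lift(best_low)
for _ in range(500):
    cand = z + rng.normal(scale=0.05, size=z.shape)
    if confidence(cand) > confidence(z):
        z = cand
```

The constrained stage keeps the search well-conditioned (fewer free coordinates, fewer local optima), while the relaxed stage recovers the detail that the constraint suppressed, which mirrors the low-freedom-then-high-freedom schedule the abstract describes.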
Source journal
IEEE Transactions on Dependable and Secure Computing (Engineering & Technology – Computer Science: Software Engineering)
CiteScore: 11.20
Self-citation rate: 5.50%
Annual articles: 354
Review time: 9 months
Journal description: The "IEEE Transactions on Dependable and Secure Computing (TDSC)" is a prestigious journal that publishes high-quality, peer-reviewed research in computer science, specifically targeting the development of dependable and secure computing systems and networks. The journal is dedicated to exploring the fundamental principles, methodologies, and mechanisms that enable the design, modeling, and evaluation of systems meeting required levels of reliability, security, and performance. The scope of TDSC includes research on measurement, modeling, and simulation techniques that contribute to the understanding and improvement of system performance under various constraints. It also covers the foundations necessary for the joint evaluation, verification, and design of systems that balance performance, security, and dependability. By publishing archival research results, TDSC aims to provide a valuable resource for researchers, engineers, and practitioners working in cybersecurity, fault tolerance, and system reliability. The journal's focus on cutting-edge research keeps it at the forefront of advancements in the field, promoting the development of technologies critical to the functioning of modern, complex systems.