IEEE Transactions on Information Forensics and Security: Latest Articles

Grey-Box Adversarial Attack on Communication in Communicative Multi-Agent Reinforcement Learning
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-11 DOI: 10.1109/TIFS.2025.3560203
Xiao Ma;Wu-Jun Li
{"title":"Grey-Box Adversarial Attack on Communication in Communicative Multi-Agent Reinforcement Learning","authors":"Xiao Ma;Wu-Jun Li","doi":"10.1109/TIFS.2025.3560203","DOIUrl":"10.1109/TIFS.2025.3560203","url":null,"abstract":"Effective communication is a necessary condition for intelligent agents to collaborate in multi-agent environments. Although increasing attention has been paid to communicative multi-agent reinforcement learning (CMARL), the vulnerability of the communication mechanism in CMARL has not been well investigated, especially when there exist malicious agents that send adversarial communication messages to other regular agents. Existing works about adversarial communication in CMARL focus on black-box attacks where the attacker cannot access any model within the multi-agent system (MAS). However, grey-box attacks are a type of more practical attack, where the attacker has access to the models of its controlled agents. To the best of our knowledge, no research has been conducted to investigate grey-box attacks on communication in CMARL. In this paper, we propose the first grey-box attack method on communication in CMARL, which is called victim-simulation based adversarial attack (VSAA). At each timestep, the attacker simulates a victim attacked by other regular agents’ communication messages and generates adversarial perturbations on its received communication messages. The attacker then sends the aggregation of these perturbations to the regular agents through communication messages, which will induce non-optimal actions of the regular agents and subsequently degrade the performance of the MAS. Experimental results on multiple tasks show that VSAA can effectively degrade the performance of the MAS. The findings in this paper will make researchers aware of the grey-box attack in CMARL.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4679-4693"},"PeriodicalIF":6.3,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143822415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
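For intuition, here is a tiny, hedged sketch of the kind of perturbation step the abstract describes: the attacker runs a simulated victim policy on an incoming message and takes one gradient-sign step that pushes the victim away from its currently preferred action. The MLP policy, the loss choice, and the epsilon budget are placeholder assumptions for illustration; this is not the paper's VSAA procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Simulated victim policy: observation (4) + incoming message (4) -> 4 action logits.
policy = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 4))

obs = torch.randn(1, 4)                        # victim observation (assumed estimable by the attacker)
msg = torch.randn(1, 4, requires_grad=True)    # communication message the attacker will poison

logits = policy(torch.cat([obs, msg], dim=-1))
preferred = logits.argmax(dim=-1)              # action the simulated victim would take now

# One FGSM-style ascent step on the loss of the preferred action, so the
# perturbed message pushes the simulated victim away from that action.
loss = F.cross_entropy(logits, preferred)
loss.backward()

epsilon = 0.1
poisoned_msg = (msg + epsilon * msg.grad.sign()).detach()
print(poisoned_msg)
```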
De-Anonymizing Monero: A Maximum Weighted Matching-Based Approach
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-11 DOI: 10.1109/TIFS.2025.3560193
Xingyu Yang;Lei Xu;Liehuang Zhu
{"title":"De-Anonymizing Monero: A Maximum Weighted Matching-Based Approach","authors":"Xingyu Yang;Lei Xu;Liehuang Zhu","doi":"10.1109/TIFS.2025.3560193","DOIUrl":"10.1109/TIFS.2025.3560193","url":null,"abstract":"As the leading privacy coin, Monero is widely recognized for its high level of anonymity. Monero utilizes linkable ring signature to hide the sender of a transaction. Although the anonymity is preferred by users, it poses challenges for authorities seeking to regulate financial activities. Researchers are actively engaged in studying methods to de-anonymize Monero. Previous methods usually relied on a specific type of ring called zero-mixin ring. However, these methods have become ineffective after Monero enforced the minimum ringsize. In this paper, we propose a novel approach based on maximum weighted matching to de-anonymize Monero. The proposed approach does not rely on the existence of zero-mixin rings. Specifically, we construct a weighted bipartite graph to represent the relationship between rings and transaction outputs. Based on the empirical probability distribution derived from users’ spending patterns, three weighting methods are proposed. Accordingly, we transform the de-anonymization problem into a maximum weight matching (MWM) problem. Due to the scale of the graph, traditional algorithms for solving the MWM problem are not applicable. Instead, we propose a deep reinforcement learning-based algorithm that achieves near-optimal results. Experimental results on both real-world dataset and synthetic dataset demonstrate the effectiveness of the proposed approach.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4726-4738"},"PeriodicalIF":6.3,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143822509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
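As a toy illustration of the matching step described in the abstract, the sketch below builds a small ring-to-output weight matrix and recovers the most plausible assignment as a maximum-weight matching. The random weights stand in for the paper's spending-pattern-based weights, and SciPy's assignment solver stands in for the deep reinforcement learning solver used at scale.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_rings, n_outputs = 5, 8

# weights[i, j]: assumed plausibility that ring i really spends output j
# (placeholder random values instead of the empirical spend-age distribution).
weights = rng.random((n_rings, n_outputs))

# Maximum-weight matching on a bipartite graph == assignment problem with negated costs.
rows, cols = linear_sum_assignment(-weights)
for ring, output in zip(rows, cols):
    print(f"ring {ring} -> output {output} (weight {weights[ring, output]:.2f})")
```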
Revisiting Location Privacy in MEC-Enabled Computation Offloading
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-10 DOI: 10.1109/TIFS.2025.3558593
Jingyi Li;Wenzhong Ou;Bei Ouyang;Shengyuan Ye;Liekang Zeng;Lin Chen;Xu Chen
{"title":"Revisiting Location Privacy in MEC-Enabled Computation Offloading","authors":"Jingyi Li;Wenzhong Ou;Bei Ouyang;Shengyuan Ye;Liekang Zeng;Lin Chen;Xu Chen","doi":"10.1109/TIFS.2025.3558593","DOIUrl":"10.1109/TIFS.2025.3558593","url":null,"abstract":"Mobile Edge Computing (MEC) revolutionizes real-time applications by extending cloud capabilities to network edges, enabling efficient computation offloading from mobile devices. In recent years, the location privacy concern within MEC offloading has been recognized, prompting the proposal of various methodologies to mitigate this concern. However, this paper demonstrates that the prevailing privacy protection methods exhibit vulnerabilities. First, we analyze the shortcomings of current methodologies through both system modeling and evaluation metrics. Then, we introduce a Learning-based Trajectory Reconstruction Attack (LTRA) to expose the weaknesses, achieving up to 91.2% reconstruction accuracy against the state-of-the-art protection method. Further, based on <italic>w</i>-event differential privacy, we propose an <inline-formula> <tex-math>$ell $ </tex-math></inline-formula>-trajectory differentially private mechanism, i.e., OffloadingBD. Compared to the existing works, OffloadingBD provides more flexible and enhanced protection with sound privacy theoretical guarantee. Lastly, we conduct extensive experiments to evaluate LTRA and OffloadingBD. The experiment results show that LTRA has good generalization ability and OffloadingBD showcases a superior balance between privacy and utility compared with baselines.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4396-4407"},"PeriodicalIF":6.3,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143819502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
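For concreteness, the following minimal sketch shows the differential-privacy building block such mechanisms rest on: each reported location is perturbed with Laplace noise calibrated to a per-report budget. The per-coordinate Laplace mechanism and the naive budget split over a window are illustrative assumptions, not the paper's ℓ-trajectory / w-event accounting.

```python
import numpy as np

def perturb_location(xy, epsilon, sensitivity=1.0):
    """Report a noisy 2-D location under budget epsilon (per-coordinate Laplace noise)."""
    scale = sensitivity / epsilon
    return xy + np.random.default_rng().laplace(0.0, scale, size=2)

# A short trajectory of true offloading locations (made-up coordinates).
trajectory = np.array([[0.0, 0.0], [0.5, 0.2], [1.0, 0.4]])

# Naive accounting: spread a total budget evenly over a window of w reports.
eps_total, w = 1.0, 3
noisy = np.array([perturb_location(p, eps_total / w) for p in trajectory])
print(noisy)
```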
Flexible Secure Biometrics: A Protected Modality-Invariant Face-Periocular Recognition System
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-10 DOI: 10.1109/TIFS.2025.3559785
Tiong-Sik Ng;Jihyeon Kim;Andrew Beng Jin Teoh
{"title":"Flexible Secure Biometrics: A Protected Modality-Invariant Face-Periocular Recognition System","authors":"Tiong-Sik Ng;Jihyeon Kim;Andrew Beng Jin Teoh","doi":"10.1109/TIFS.2025.3559785","DOIUrl":"https://doi.org/10.1109/TIFS.2025.3559785","url":null,"abstract":"This paper introduces Flexible Secure Biometrics (FSB), a novel learning framework that protects biometric templates across face-periocular modalities in intra- and cross-modality recognition tasks. The increasing flexibility of biometric recognition systems, which can match multiple template modalities, also escalates the security risks of tampering and misuse. To address these challenges, we propose the FSB-HashNet architecture, which integrates two key components: a periocular-face feature extractor and an adversarial hash generator. The feature extractor identifies and emphasizes shared prominent features between periocular and face modalities, creating modality-invariant representations. Meanwhile, the adversarial network simultaneously generates secure hash codes and ensures alignment across different modalities, preserving modality-invariant characteristics. The FSB-HashNet employs a two-factor protection mechanism using a subject’s biometric data and a user-specific key, resulting in robust, protected hash codes that offer image-level security without compromising recognition performance. Our comprehensive experiments on diverse, in-the-wild datasets under open-set conditions demonstrate the framework’s ability to maintain key security properties—unlinkability, revocability, and non-invertibility while preserving decent recognition accuracy. Codes are publicly available at <uri>https://github.com/tiongsikng/fsb_hashnet</uri>","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4610-4621"},"PeriodicalIF":6.3,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143913313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
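The sketch below illustrates the general two-factor template-protection idea (a user-specific key plus the biometric embedding) in the style of classic BioHashing; it is not the FSB-HashNet adversarial hash generator. The 512-dimensional embedding and 128-bit hash length are assumptions.

```python
import numpy as np

def protected_hash(embedding: np.ndarray, user_key: int, n_bits: int = 128) -> np.ndarray:
    """Two-factor hash: the user-specific key seeds a random projection, then binarize."""
    rng = np.random.default_rng(user_key)            # second factor seeds the projection
    projection = rng.standard_normal((n_bits, embedding.shape[0]))
    return (projection @ embedding > 0).astype(np.uint8)

face_embedding = np.random.default_rng(7).standard_normal(512)   # stand-in for a CNN feature
hash_a = protected_hash(face_embedding, user_key=42)
hash_b = protected_hash(face_embedding, user_key=43)              # revoking = issuing a new key
print((hash_a != hash_b).mean())   # different keys give near-uncorrelated hashes (~0.5)
```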
Hierarchical Cross-Modal Image Generation for Multimodal Biometric Recognition With Missing Modality
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-10 DOI: 10.1109/TIFS.2025.3559802
Zaiyu Pan;Shuangtian Jiang;Xiao Yang;Hai Yuan;Jun Wang
{"title":"Hierarchical Cross-Modal Image Generation for Multimodal Biometric Recognition With Missing Modality","authors":"Zaiyu Pan;Shuangtian Jiang;Xiao Yang;Hai Yuan;Jun Wang","doi":"10.1109/TIFS.2025.3559802","DOIUrl":"10.1109/TIFS.2025.3559802","url":null,"abstract":"Multimodal biometric recognition has shown great potential in identity authentication tasks and has attracted increasing interest recently. Currently, most existing multimodal biometric recognition algorithms require test samples with complete multimodal data. However, it often encounters the problem of missing modality data and thus suffers severe performance degradation in practical scenarios. To this end, we proposed a hierarchical cross-modal image generation for palmprint and palmvein based multimodal biometric recognition with missing modality. First, a hierarchical cross-modal image generation model is designed to achieve the pixel alignment of different modalities and reconstruct the image information of missing modality. Specifically, a cross-modal texture transfer network is utilized to implement the texture style transformation between different modalities, and then a cross-modal structure generation network is proposed to establish the correlation mapping of structural information between different modalities. Second, multimodal dynamic sparse feature fusion model is presented to obtain more discriminative and reliable representations, which can also enhance the robustness of our proposed model to dynamic changes in image quality of different modalities. The proposed model is evaluated on three multimodal biometric benchmark datasets, and experimental results demonstrate that our proposed model outperforms recent mainstream incomplete multimodal learning models.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4308-4321"},"PeriodicalIF":6.3,"publicationDate":"2025-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143819438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
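A skeletal sketch of the missing-modality workflow the abstract outlines: when one modality is absent at test time, a cross-modal generator reconstructs it from the available modality before the two branches are fused for recognition. The tiny convolutional generator and encoder are placeholders, not the paper's hierarchical texture-transfer and structure-generation networks.

```python
import torch
import torch.nn as nn

class CrossModalGenerator(nn.Module):
    """Stand-in for a palmprint -> palmvein generator (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))

    def forward(self, x):
        return self.net(x)

# Shared placeholder encoder producing an 8-d descriptor per image.
encoder = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten())
generator = CrossModalGenerator()

palmprint = torch.randn(4, 1, 64, 64)
palmvein = None                                      # the missing modality at test time
if palmvein is None:
    palmvein = generator(palmprint)                  # reconstruct the missing modality
fused = torch.cat([encoder(palmprint), encoder(palmvein)], dim=-1)   # simple concatenation fusion
print(fused.shape)                                   # torch.Size([4, 16])
```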
Enhancing Networked Control Systems Resilience Against DoS Attacks: A Data-Driven Approach With Adaptive Sampled-Data and Compression
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-09 DOI: 10.1109/TIFS.2025.3559464
Xiao Cai;Yanbin Sun;Xiangpeng Xie;Nan Wei;Kaibo Shi;Huaicheng Yan;Zhihong Tian
{"title":"Enhancing Networked Control Systems Resilience Against DoS Attacks: A Data-Driven Approach With Adaptive Sampled-Data and Compression","authors":"Xiao Cai;Yanbin Sun;Xiangpeng Xie;Nan Wei;Kaibo Shi;Huaicheng Yan;Zhihong Tian","doi":"10.1109/TIFS.2025.3559464","DOIUrl":"10.1109/TIFS.2025.3559464","url":null,"abstract":"This paper addresses the critical challenge of achieving asymptotic stability in networked control systems (NCSs) under denial-of-service (DoS) attacks, focusing on maintaining security and stability within bandwidth-constrained environments. First, we construct a practical attack model using the NSL-KDD dataset to provide a realistic representation of DoS attack dynamics, capturing key attributes such as attack duration and frequency. Then, an iterative shrinkage-thresholding algorithm (ISTA) is introduced to supervise the adaptive sampled-data controller (ADSC), dynamically optimizing the sampling period to enhance control performance while minimizing communication overhead. To further mitigate the impact of DoS attacks, we propose a novel data compression mechanism that adapts to varying network conditions, ensuring efficient bandwidth utilization and preserving critical control data fidelity. In addition, the stability of the NCSs is rigorously verified through Lyapunov-Krasovskii functions (LKFs), demonstrating robust system behavior even under adverse network conditions. Finally, the effectiveness and practicality of the proposed approach are validated through experimental studies on a 2-degree-of-freedom (2-DoF) helicopter system, confirming its capability to ensure stability, optimize communication efficiency, and mitigate the effects of DoS attacks in real-world scenarios.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4100-4109"},"PeriodicalIF":6.3,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143813618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
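ISTA itself is a standard algorithm; the minimal sketch below solves a generic sparse least-squares problem min_x 0.5*||Ax - b||^2 + lam*||x||_1 to show the iteration the paper builds on. How its output supervises the adaptive sampling period in the ADSC is not reproduced here.

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding: gradient step followed by soft thresholding."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)              # gradient of 0.5*||Ax - b||^2
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft-threshold step
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 100))
x_true = np.zeros(100)
x_true[:5] = 1.0                              # a 5-sparse signal
b = A @ x_true
print(np.round(ista(A, b), 2)[:10])           # sparse estimate of the first coefficients
```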
LightGBM-Based Audio Watermarking Robust to Recapturing and Hybrid Attacks
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-09 DOI: 10.1109/TIFS.2025.3559408
Zhaopin Su;Zhaofang Weng;Guofu Zhang;Chensi Lian;Niansong Wang
{"title":"LightGBM-Based Audio Watermarking Robust to Recapturing and Hybrid Attacks","authors":"Zhaopin Su;Zhaofang Weng;Guofu Zhang;Chensi Lian;Niansong Wang","doi":"10.1109/TIFS.2025.3559408","DOIUrl":"10.1109/TIFS.2025.3559408","url":null,"abstract":"Digital audio watermarking is a critical technology widely used for copyright protection, content authentication, and broadcast monitoring. However, its robustness is significantly challenged by recapturing and hybrid attacks, which can easily remove watermarks. To address this issue, this work proposes a novel scheme based on the light gradient boosting machine (LightGBM), named LRAW (LightGBM-based Robust Audio Watermarking), which is designed to increase the robustness of audio watermarking against various attacks. Specifically, the scheme begins by analysing coefficients derived from the discrete wavelet transform (DWT), graph-based transform (GBT), and singular value decomposition (SVD). The extracted singular values consistently maintain a stable descending order even under recapturing attacks at a slightly greater distance. Leveraging this stability, the watermark information is implicitly embedded into the audio signal using a quantization rule. To simulate a hybrid attack scenario, a comprehensive feature dataset comprising 396,000 pieces of DWT-GBT-SVD feature data is constructed based on 60 original recordings and 9 types of attack. Furthermore, considering the distinct influences of embedding watermark bits 0 and 1 on the quantization of singular values, the watermark extraction process is formulated as a binary classification problem. LightGBM is trained using Bayesian optimization and the feature dataset to classify the watermark bits accurately. Finally, the complete watermark is recovered using a watermark sequence matching algorithm. Theoretical analysis and experimental results demonstrate that the proposed LRAW scheme outperforms state-of-the-art watermarking methods in robustness against various recapturing and hybrid attacks, even when the distance between the acoustic source and the receiver is considerable.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4212-4227"},"PeriodicalIF":6.3,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143813617","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
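To make the embedding idea concrete, here is a minimal quantization-index-modulation (QIM) sketch: transform an audio frame, take the singular values of its approximation coefficients, and move the dominant singular value into a lattice cell that encodes one watermark bit. The plain DWT+SVD pipeline and the step size q are illustrative assumptions; the paper's GBT stage and LightGBM-based extraction are not reproduced.

```python
import numpy as np
import pywt

def embed_bit(frame: np.ndarray, bit: int, q: float = 0.5) -> np.ndarray:
    cA, cD = pywt.dwt(frame, "db1")             # 1-level DWT of the frame
    m = cA.reshape(8, -1)                       # arrange approximation coefficients as a matrix
    U, s, Vt = np.linalg.svd(m, full_matrices=False)
    # Place the largest singular value in the lattice cell encoding this bit.
    cell = np.floor(s[0] / q)
    s[0] = (cell + (0.25 if bit == 0 else 0.75)) * q
    cA_marked = (U @ np.diag(s) @ Vt).reshape(cA.shape)
    return pywt.idwt(cA_marked, cD, "db1")

def extract_bit(frame: np.ndarray, q: float = 0.5) -> int:
    cA, _ = pywt.dwt(frame, "db1")
    s = np.linalg.svd(cA.reshape(8, -1), compute_uv=False)
    return 0 if (s[0] % q) < q / 2 else 1

frame = np.random.default_rng(1).normal(size=1024)   # stand-in for one audio frame
marked = embed_bit(frame, bit=1)
print(extract_bit(marked))                            # -> 1
```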
Charge Your Clients: Payable Secure Computation and Its Applications
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-09 DOI: 10.1109/TIFS.2025.3559456
Cong Zhang;Liqiang Peng;Weiran Liu;Shuaishuai Li;Meng Hao;Lei Zhang;Dongdai Lin
{"title":"Charge Your Clients: Payable Secure Computation and Its Applications","authors":"Cong Zhang;Liqiang Peng;Weiran Liu;Shuaishuai Li;Meng Hao;Lei Zhang;Dongdai Lin","doi":"10.1109/TIFS.2025.3559456","DOIUrl":"10.1109/TIFS.2025.3559456","url":null,"abstract":"The online realm has witnessed a surge in the buying and selling of data, prompting the emergence of dedicated data marketplaces. These platforms cater to servers (sellers), enabling them to set prices for access to their data, and clients (buyers), who can subsequently purchase these data, thereby streamlining and facilitating such transactions. However, the current data market is primarily confronted with the following issues. Firstly, they fail to protect client privacy, presupposing that clients submit their queries in plaintext. Secondly, these models are susceptible to being impacted by malicious client behavior, for example, enabling clients to potentially engage in arbitrage activities. To address the aforementioned issues, we propose payable secure computation, a novel secure computation paradigm specifically designed for data pricing scenarios. It grants the server the ability to securely procure essential pricing information while protecting the privacy of client queries. Additionally, it fortifies the server’s privacy against potential malicious client activities. As specific applications, we have devised customized payable protocols for two distinct secure computation scenarios: Keyword Private Information Retrieval (KPIR) and Private Set Intersection (PSI). We implement our two payable protocols and compare them with the state-of-the-art related protocols that do not support pricing as a baseline. Since our payable protocols are more powerful in the data pricing setting, the experiment results show that they do not introduce much overhead over the baseline protocols. Our payable KPIR achieves the same online cost as baseline, while the setup is about <inline-formula> <tex-math>$1.3-1.6times $ </tex-math></inline-formula> slower than it. Our payable PSI needs about <inline-formula> <tex-math>$2times $ </tex-math></inline-formula> more communication cost than that of baseline protocol, while the runtime is <inline-formula> <tex-math>$1.5-3.2times $ </tex-math></inline-formula> slower than it depending on the network setting.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4183-4195"},"PeriodicalIF":6.3,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143813884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
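As background for the PSI application, the toy sketch below shows a plain Diffie-Hellman-style PSI without any pricing: each party blinds the hashes of its elements with a secret exponent, and the double-blinded values coincide exactly on the intersection. The small modulus, the hash-to-group shortcut, and the absence of the paper's payment logic are all deliberate simplifications; this is not the payable protocol itself.

```python
import hashlib
import secrets

P = 2**127 - 1          # toy Mersenne-prime modulus, far too small for real use

def h2g(item: str) -> int:
    """Hash an item to a nonzero residue mod P (toy hash-to-group)."""
    return int.from_bytes(hashlib.sha256(item.encode()).digest(), "big") % P or 1

client_set = {"alice@example.com", "bob@example.com"}
server_set = {"bob@example.com", "carol@example.com"}

a = secrets.randbelow(P - 2) + 1        # client's secret blinding exponent
b = secrets.randbelow(P - 2) + 1        # server's secret blinding exponent

client_blinded = {x: pow(h2g(x), a, P) for x in client_set}            # client -> server
client_double = {x: pow(v, b, P) for x, v in client_blinded.items()}   # server blinds again
server_blinded = {pow(h2g(y), b, P) for y in server_set}               # server -> client
server_double = {pow(v, a, P) for v in server_blinded}                 # client finishes blinding

# Elements whose double-blinded values coincide are in the intersection.
print({x for x, v in client_double.items() if v in server_double})     # {'bob@example.com'}
```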
RaSA: Robust and Adaptive Secure Aggregation for Edge-Assisted Hierarchical Federated Learning
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-09 DOI: 10.1109/TIFS.2025.3559411
Lingling Wang;Mei Huang;Zhengyin Zhang;Meng Li;Jingjing Wang;Keke Gai
{"title":"RaSA: Robust and Adaptive Secure Aggregation for Edge-Assisted Hierarchical Federated Learning","authors":"Lingling Wang;Mei Huang;Zhengyin Zhang;Meng Li;Jingjing Wang;Keke Gai","doi":"10.1109/TIFS.2025.3559411","DOIUrl":"10.1109/TIFS.2025.3559411","url":null,"abstract":"Secure Aggregation (SA), in the Federated Learning (FL) setting, enables distributed clients to collaboratively learn a shared global model while keeping their raw data and local gradients private. However, when SA is implemented in edge-intelligence-driven FL, the open and heterogeneous environments will hinder model aggregation, slow down model convergence speed, and decrease model generalization ability. To address these issues, we present a Robust and adaptive Secure Aggregation (RaSA) protocol to guarantee robustness and privacy in the presence of non-IID data, heterogeneous system, and malicious edge servers. Specifically, we first design an adaptive weights updating strategy to address the non-IID data issue by considering the impact of both gradient similarity and gradient diversity on the model aggregation. Meanwhile, we enhance privacy protection by preventing privacy leakage from both gradients and aggregation weights. Different from previous work, we address system heterogeneity in the case of malicious attacks, and the malicious behavior from edge servers can be detected by the proposed verifiable approach. Moreover, we eliminate the influence of straggling communication links and dropouts on the model convergence by combining efficient product-coded computing with repetition-based secret sharing. Finally, we perform a theoretical analysis that proves the security of RaSA. Extensive experimental results show that RaSA can ensure model convergence without affecting the generalization ability under non-IID scenarios. Moreover, the decoding efficiency of RaSA achieves <inline-formula> <tex-math>$1.33times $ </tex-math></inline-formula> and <inline-formula> <tex-math>$6.4times $ </tex-math></inline-formula> faster than the state-of-the-art product-coded and one-dimensional coded computing schemes.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4280-4295"},"PeriodicalIF":6.3,"publicationDate":"2025-04-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143813885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
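The sketch below shows one simple plaintext form of similarity-weighted aggregation, in the spirit of the adaptive-weight idea: each client gradient is weighted by its cosine similarity to the average direction, so off-distribution updates count less. The exact weighting rule, and all of RaSA's secret-sharing, coding, and verification machinery, are not reproduced; every detail here is an assumption for illustration.

```python
import numpy as np

def aggregate(client_grads: np.ndarray) -> np.ndarray:
    """Weight each client gradient by its cosine similarity to the mean direction."""
    mean_dir = client_grads.mean(axis=0)
    sims = np.array([
        np.dot(g, mean_dir) / (np.linalg.norm(g) * np.linalg.norm(mean_dir) + 1e-12)
        for g in client_grads
    ])
    weights = np.clip(sims, 0.0, None)           # ignore clients pointing the "wrong way"
    weights = weights / (weights.sum() + 1e-12)  # normalize to a convex combination
    return weights @ client_grads

grads = np.random.default_rng(0).normal(size=(5, 10))
grads[4] *= -5.0                                 # one client pushing in the opposite direction
print(np.round(aggregate(grads), 3))
```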
Extracting Private Training Data in Federated Learning From Clients
IF 6.3 | CAS Tier 1, Computer Science
IEEE Transactions on Information Forensics and Security Pub Date: 2025-04-08 DOI: 10.1109/TIFS.2025.3558581
Jiaheng Wei;Yanjun Zhang;Leo Yu Zhang;Chao Chen;Shirui Pan;Kok-Leong Ong;Jun Zhang;Yang Xiang
{"title":"Extracting Private Training Data in Federated Learning From Clients","authors":"Jiaheng Wei;Yanjun Zhang;Leo Yu Zhang;Chao Chen;Shirui Pan;Kok-Leong Ong;Jun Zhang;Yang Xiang","doi":"10.1109/TIFS.2025.3558581","DOIUrl":"10.1109/TIFS.2025.3558581","url":null,"abstract":"The utilization of machine learning algorithms in distributed web applications is experiencing significant growth. One notable approach is Federated Learning (FL) Recent research has brought attention to the vulnerability of FL to gradient inversion attacks, which seek to reconstruct the original training samples, posing a substantial threat to client privacy. Most existing gradient inversion attacks, however, require control over the central server and rely on substantial prior knowledge, including information about batch normalization and data distribution. In this study, we introduce Poisoning Gradient Leakage from Client (PGLC), a novel attack method that operates from the clients’ side. For the first time, we demonstrate the feasibility of a client-side adversary with limited knowledge successfully recovering training samples from the aggregated global model. Our approach enables the adversary to employ a malicious model that increases the loss of a specific targeted class of interest. When honest clients employ the poisoned global model, the gradients of samples become distinct in the aggregated update. This allows the adversary to effectively reconstruct private inputs from other clients using the aggregated update. Furthermore, our <sc>PGLC</small> attack exhibits stealthiness against Byzantine-robust aggregation rules (AGRs). Through the optimization of malicious updates and the blending of benign updates with a malicious replacement vector, our method remains undetected by these defense mechanisms. We conducted experiments across various benchmark datasets, considering representative Byzantine-robust AGRs and exploring different FL settings with varying levels of adversary knowledge about the data. Our results consistently demonstrate the ability of <sc>PGLC</small> to extract training data in all tested scenarios.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4525-4540"},"PeriodicalIF":6.3,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143805646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
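For background on the reconstruction step, here is a compact gradient-matching ("deep leakage from gradients") sketch: given a gradient observed for a model, the attacker optimizes a dummy input and label until their gradient matches. PGLC's actual contribution, the client-side poisoning that makes a target class stand out in the aggregated update, is not shown; the tiny linear model and single sample are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Linear(20, 5)
criterion = nn.CrossEntropyLoss()

# The "victim" gradient the attacker observes for one private sample.
x_true, y_true = torch.randn(1, 20), torch.tensor([3])
true_grads = torch.autograd.grad(criterion(model(x_true), y_true), model.parameters())

# The attacker optimizes a dummy sample so its gradient reproduces the observed one.
x_dummy = torch.randn(1, 20, requires_grad=True)
y_dummy = torch.randn(1, 5, requires_grad=True)          # soft label, softmaxed below
opt = torch.optim.Adam([x_dummy, y_dummy], lr=0.05)

for _ in range(300):
    opt.zero_grad()
    loss = criterion(model(x_dummy), y_dummy.softmax(dim=-1))
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()

# Reconstruction error between the dummy and the true input; shrinks as gradients align.
print(torch.nn.functional.mse_loss(x_dummy.detach(), x_true))
```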