IEEE Transactions on Information Forensics and Security: Latest Articles

Privacy-Preserving Generative Modeling With Sliced Wasserstein Distance
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516549
Ziniu Liu; Han Yu; Kai Chen; Aiping Li
Abstract: Large models require larger datasets. While people benefit from training large models on massive amounts of data, they must also be concerned about privacy issues. To address this issue, we propose a novel approach to private generative modeling that uses the Sliced Wasserstein Distance (SWD) metric in a Differentially Private (DP) manner. We propose Normalized Clipping, a parameter-free clipping technique that generates higher-quality images. Experiments demonstrate the advantages of Normalized Clipping over the traditional clipping method in both parameter tuning and model performance. Moreover, experimental results indicate that our model outperforms previous methods on differentially private image generation tasks.
Volume 20, pages 1011-1022
Citations: 0
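The sliced Wasserstein distance at the core of this abstract can be estimated with random 1-D projections; below is a minimal sketch (the function name and parameters are mine, not from the paper), assuming two equal-sized sample sets:

```python
import numpy as np

def sliced_wasserstein_distance(x, y, n_projections=50, seed=0):
    """Monte Carlo estimate of the sliced Wasserstein-2 distance
    between two point clouds x, y of shape (n, d)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    # Random unit directions on the sphere
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both sets onto each direction and sort: in 1-D, the
    # optimal transport plan simply matches sorted samples.
    px = np.sort(x @ theta.T, axis=0)
    py = np.sort(y @ theta.T, axis=0)
    return np.sqrt(np.mean((px - py) ** 2))
```

Because each projected problem is one-dimensional, the estimate avoids solving a full optimal-transport problem, which is what makes SWD attractive as a generative-model training loss.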
Hard Adversarial Example Mining for Improving Robust Fairness
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516554
Chenhao Lin; Xiang Ji; Yulong Yang; Qian Li; Zhengyu Zhao; Zhe Peng; Run Wang; Liming Fang; Chao Shen
Abstract: Adversarial training (AT) is widely considered the state-of-the-art technique for improving the robustness of deep neural networks (DNNs) against adversarial examples (AEs). Nevertheless, recent studies have revealed that adversarially trained models are prone to unfairness problems. Recent works in this field usually apply class-wise regularization methods to enhance the fairness of AT. However, this paper finds that these paradigms can be sub-optimal for improving robust fairness. Specifically, we empirically observe that AEs that are already robust (referred to as "easy AEs" in this paper) are useless and even harmful for improving robust fairness. To this end, we propose the Hard Adversarial example Mining (HAM) technique, which concentrates on mining hard AEs while discarding the easy AEs in AT. Specifically, HAM identifies the easy AEs and hard AEs with a fast adversarial attack method. By discarding the easy AEs and reweighting the hard AEs, the robust fairness of the model can be efficiently and effectively improved. Extensive experimental results on four image classification datasets demonstrate the improvement of HAM in robust fairness and training efficiency compared to several state-of-the-art fair adversarial training methods. Our code is available at https://github.com/yyl-github-1896/HAM.
Volume 20, pages 350-363
Citations: 0
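The discard-and-reweight step can be sketched as a per-example loss weighting, assuming "easy AEs" are those the model still classifies correctly after the fast attack; the function name, reweight factor, and normalization below are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def ham_weights(adv_correct, per_example_loss, reweight=2.0):
    """Hard adversarial example mining, sketched as loss weighting.
    adv_correct: bool array, True where the model still classifies the
    adversarial example correctly (an "easy AE", which is discarded).
    Hard AEs are up-weighted; easy AEs contribute zero loss."""
    adv_correct = np.asarray(adv_correct)
    loss = np.asarray(per_example_loss, dtype=float)
    w = np.where(adv_correct, 0.0, reweight)
    # Rescale so the weighted batch loss keeps a comparable magnitude
    if w.sum() > 0:
        w = w * len(w) / w.sum()
    return w * loss
```

In a training loop, the returned weighted losses would be summed (or averaged) in place of the plain adversarial loss, so gradient updates concentrate on the hard examples.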
Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516561
Decheng Liu; Tao Chen; Chunlei Peng; Nannan Wang; Ruimin Hu; Xinbo Gao
Abstract: Due to the rapid development of deep image generation technology, visual data forgery detection plays an increasingly important role in social and economic security. Existing forgery detection methods suffer from unsatisfactory generalization when determining authenticity in unseen domains. In this paper, we propose a novel Attention Consistency refined Masked Frequency forgery representation model toward a generalizing face forgery detection algorithm (ACMF). Most forgery technologies introduce high-frequency-aware cues, which make it easy to distinguish source authenticity but difficult to generalize to unseen artifact types. The masked frequency forgery representation module is designed to explore robust forgery cues by randomly discarding high-frequency information. In addition, we find that inconsistency of the forgery saliency map across the detection network can affect generalizability. Thus, forgery attention consistency is introduced to force detectors to focus on similar attention regions for better generalization ability. Experimental results on several public face forgery datasets (FaceForensics++, DFD, Celeb-DF, WDF, and DFDC) demonstrate the superior performance of the proposed method compared with state-of-the-art methods. The source code and models are publicly available at https://github.com/chenboluo/ACMF.
Volume 20, pages 504-515
Citations: 0
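The "randomly discarding high-frequency information" idea can be illustrated with an FFT mask on a grayscale image; this is a sketch under my own choices of cutoff radius and drop probability, not the paper's implementation:

```python
import numpy as np

def mask_high_frequency(img, radius_frac=0.25, drop_prob=0.5, seed=0):
    """Randomly zero out high-frequency FFT coefficients of a
    grayscale image of shape (H, W), so that downstream features
    cannot rely solely on fragile high-frequency forgery cues."""
    rng = np.random.default_rng(seed)
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)
    # Coefficients beyond the low-frequency disc are candidates to drop
    high = dist > radius_frac * min(h, w)
    drop = high & (rng.random((h, w)) < drop_prob)
    f[drop] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```

The central (low-frequency) coefficients, including the DC term, are always kept, so global structure and mean intensity survive while fine high-frequency detail is randomly suppressed.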
Information Leakage Measures for Imperfect Statistical Information: Application to Non-Bayesian Framework
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516585
Shahnewaz Karim Sakib; George T. Amariucai; Yong Guan
Abstract: This paper analyzes the problem of estimating information leakage when the complete statistics of the privacy mechanism are not known, and the only available information consists of several input-output pairs obtained through interaction with the system or through some side channel. Several metrics, such as subjective leakage, objective leakage, and confidence boost, were previously introduced for this purpose, but by design they only work in a Bayesian framework. However, Bayesian inference can quickly become intractable when the domains of the involved variables are large. In this paper, we focus on this exact problem and propose a novel machine-learning approach to estimating the leakage measures in a non-Bayesian framework, when the true knowledge of the privacy mechanism is beyond the reach of the user. We first adapt the definitions of the leakage metrics to a non-Bayesian framework and derive their statistical bounds; we then evaluate the performance of these metrics in experiments using Neural Networks, Random Forest classifiers, and Support Vector Machines. We also evaluate their performance on an image dataset to demonstrate the versatility of the metrics. Finally, we provide a comparative analysis between our proposed metrics and the metrics of the Bayesian framework.
Volume 20, pages 1065-1080
Citations: 0
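To give the flavor of estimating leakage purely from observed input-output pairs, here is a plug-in estimate of empirical mutual information between the mechanism's input and output; this generic estimator is only a stand-in for sample-based leakage estimation and is not one of the paper's proposed metrics:

```python
import numpy as np
from collections import Counter

def plugin_mutual_information(pairs):
    """Plug-in estimate of I(X;Y) in bits from observed (x, y) pairs,
    with all probabilities replaced by empirical frequencies. Usable
    when the mechanism's true statistics are unknown and only samples
    (e.g., from a side channel) are available."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts of inputs
    py = Counter(y for _, y in pairs)    # marginal counts of outputs
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        # p_joint / (p_x * p_y) written with counts to limit rounding
        mi += p_joint * np.log2(p_joint * n * n / (px[x] * py[y]))
    return mi
```

Like the Bayesian metrics discussed in the abstract, this plug-in estimator degrades as the variable domains grow relative to the sample size, which is exactly the regime the paper's machine-learning approach targets.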
Decaf: Data Distribution Decompose Attack Against Federated Learning
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516545
Zhiyang Dai; Yansong Gao; Chunyi Zhou; Anmin Fu; Zhi Zhang; Minhui Xue; Yifeng Zheng; Yuqing Zhang
Abstract: In contrast to prevalent Federated Learning (FL) privacy inference techniques such as generative adversarial network attacks, membership inference attacks, property inference attacks, and model inversion attacks, we devise an innovative privacy threat: the Data Distribution Decompose Attack on FL, termed Decaf. This attack enables an honest-but-curious FL server to meticulously profile the proportion of each class owned by a victim FL user, divulging sensitive information such as local market item distribution and business competitiveness. The crux of Decaf lies in the observation that the magnitude of local model gradient changes closely mirrors the underlying data distribution, including the proportion of each class. Decaf addresses two crucial challenges: accurately identifying the missing/null class(es) of any victim user as a premise, and then quantifying the precise relationship between gradient changes and each remaining non-null class. Notably, Decaf operates stealthily: it is entirely passive and undetectable to victim users regarding the infringement of their data-distribution privacy. Experimental validation on five benchmark datasets (MNIST, FASHION-MNIST, CIFAR-10, FER-2013, and SkinCancer), employing diverse model architectures including customized convolutional networks, standardized VGG16, and ResNet18, demonstrates Decaf's efficacy. Results indicate its ability to accurately decompose local user data distribution, regardless of whether it is IID or non-IID distributed. Specifically, the dissimilarity measured using the L∞ distance between the distribution decomposed by Decaf and the ground truth is consistently below 5% when no null classes exist. Moreover, Decaf achieves 100% accuracy in determining any victim user's null classes, validated through formal proof.
Volume 20, pages 405-420
Citations: 0
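The quoted "below 5%" figure is an L∞ distance between class-proportion vectors; a small helper (my naming, assuming both inputs can be normalized to proportions) makes the measure concrete:

```python
import numpy as np

def linf_distribution_error(estimated, true):
    """L-infinity distance between an estimated class-proportion
    vector and the ground-truth proportions: the largest absolute
    per-class error after normalizing both vectors to sum to 1."""
    est = np.asarray(estimated, dtype=float)
    tru = np.asarray(true, dtype=float)
    return np.max(np.abs(est / est.sum() - tru / tru.sum()))
```

Under this measure, "below 5%" means no single class proportion recovered by the attack deviates from the victim's true proportion by more than 0.05.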
Dualistic Disentangled Meta-Learning Model for Generalizable Person Re-Identification
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516540
Jia Sun; Yanfeng Li; Luyifu Chen; Houjin Chen; Minjun Wang
Abstract: Person re-identification (re-ID) is a research hotspot in intelligent monitoring and security. Domain generalizable (DG) person re-identification transfers the trained model directly to an unseen target domain for testing, which is closer to practical application than supervised or unsupervised person re-ID. Meta-learning is an effective strategy for the DG problem; nevertheless, existing meta-learning-based DG re-ID methods mainly simulate the test process in a single aspect such as identity or style, ignoring the completely different person identities and styles in the unseen target domain. To address this problem, we consider a double disentangling at the two levels of training strategy and feature learning, and propose a novel dualistic disentangled meta-learning (D²ML) model. D²ML is composed of two disentangling stages: one for the learning strategy, which spreads the one-stage meta-test into two stages, an identity meta-test stage and a style meta-test stage; the other for feature representation, which decouples the shallow-layer features into identity-related features and style-related features. Specifically, we first conduct the identity meta-test stage on different person identities, and then employ a feature-level style perturbation module (SPM) based on Fourier spectrum transformation to conduct the style meta-test stage on images with diversified styles. With these two stages, abundant changes in the unseen domain can be simulated during the meta-test phase. Besides, to learn more identity-related features, a feature disentangling module (FDM) is inserted at each stage of meta-learning and a disentangled triplet loss is developed. By constraining the relationship between identity-related and style-related features, the generalization ability of the model can be further improved. Experimental results on four public datasets show that our D²ML model achieves superior generalization performance compared to state-of-the-art methods.
Volume 20, pages 1106-1118
Citations: 0
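A Fourier-spectrum style perturbation can be sketched in the spirit of amplitude-swap methods: mix low-frequency amplitude (which tends to carry style) from another image while keeping the original phase (which tends to carry content/identity). The band width `beta` and the hard-replacement mixing rule are my assumptions, not necessarily the SPM's exact design:

```python
import numpy as np

def fourier_style_perturbation(content, style, beta=0.1):
    """Inject the low-frequency Fourier amplitude of `style` into
    `content` (both grayscale arrays of the same (H, W) shape),
    preserving content's phase spectrum."""
    fc = np.fft.fftshift(np.fft.fft2(content))
    fs = np.fft.fftshift(np.fft.fft2(style))
    amp_c, pha_c = np.abs(fc), np.angle(fc)
    amp_s = np.abs(fs)
    h, w = content.shape
    ch, cw = h // 2, w // 2
    bh, bw = int(beta * h), int(beta * w)
    # Replace only the central (low-frequency) amplitude band
    amp_c[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = \
        amp_s[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1]
    mixed = amp_c * np.exp(1j * pha_c)
    return np.real(np.fft.ifft2(np.fft.ifftshift(mixed)))
```

During the style meta-test stage, such perturbed images stand in for unseen-domain styles while the underlying identities stay fixed.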
MalFSCIL: A Few-Shot Class-Incremental Learning Approach for Malware Detection
IF 6.8 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/tifs.2024.3516565
Yuhan Chai, Ximing Chen, Jing Qiu, Lei Du, Yanjun Xiao, Qiying Feng, Shouling Ji, Zhihong Tian
Citations: 0
Online Two-Stage Channel-Based Lightweight Authentication Method for Time-Varying Scenarios
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516575
Yuhong Xue; Zhutian Yang; Zhilu Wu; Hu Wang; Guan Gui
Abstract: Physical Layer Authentication (PLA) emerges as a promising security solution, offering efficient identity verification for the Internet of Things (IoT). The advent of 5G/6G technologies has ushered in an era of extensive device connectivity, diverse networks, and complex application scenarios within IoT ecosystems. These advancements require PLA systems that are highly secure, robust, capable of online processing, and adaptable to unknown channel conditions. In this paper, we introduce a novel two-stage PLA framework that combines channel prediction with power-delay attributes, ensuring superior performance in mobile and time-varying channel environments. Specifically, our approach employs Sparse Variational Gaussian Processes (SVGP) to accurately model and track real-time channel variations, leveraging historical data for online predictions without incurring significant computational or storage overhead. The second stage of our framework enhances the robustness of the authentication process by incorporating power-delay features, which are inherently resistant to temporal fluctuations, thereby eliminating the need for additional feature extraction in noisy settings. Moreover, our authentication scheme is distribution-agnostic, utilizing Kernel Density Estimation (KDE) for non-parametric threshold determination in hypothesis testing. Theoretical analysis underpins the generalization capabilities of the proposed method. Simulation results in mobile scenarios reveal that our two-stage PLA framework reduces complexity and significantly improves identity authentication performance, particularly at low signal-to-noise ratios.
Volume 20, pages 781-795
Citations: 0
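The KDE-based, distribution-agnostic threshold idea can be sketched as follows: fit a Gaussian KDE to test statistics of the legitimate user and pick the threshold whose right-tail mass equals a target false-alarm rate. The Silverman bandwidth rule and the grid-based CDF inversion are my choices, not necessarily the paper's:

```python
import numpy as np

def kde_threshold(samples, false_alarm=0.05, grid_size=1000):
    """Non-parametric authentication threshold: the value above which
    a Gaussian-KDE fit of legitimate-user statistics leaves only
    `false_alarm` probability mass."""
    s = np.asarray(samples, dtype=float)
    # Silverman's rule-of-thumb bandwidth
    bw = 1.06 * s.std() * len(s) ** (-1 / 5)
    grid = np.linspace(s.min() - 3 * bw, s.max() + 3 * bw, grid_size)
    # Evaluate the KDE on the grid (mean of per-sample Gaussian kernels)
    dens = np.mean(
        np.exp(-0.5 * ((grid[None, :] - s[:, None]) / bw) ** 2), axis=0
    ) / (bw * np.sqrt(2 * np.pi))
    cdf = np.cumsum(dens)
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, 1 - false_alarm)]
```

Because no parametric form is assumed for the statistic's distribution, the same procedure applies unchanged as the channel statistics drift, which is the point of a distribution-agnostic test.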
Joint Identity Verification and Pose Alignment for Partial Fingerprints
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516566
Xiongjun Guan; Zhiyu Pan; Jianjiang Feng; Jie Zhou
Abstract: Portable electronic devices are becoming increasingly popular. For lightweight considerations, their fingerprint recognition modules usually use limited-size sensors. However, partial fingerprints have few matchable features, especially when there are differences in finger pressing posture or image quality, which makes partial fingerprint verification challenging. Most existing methods regard fingerprint position rectification and identity verification as independent tasks, ignoring the coupling relationship between them: relative pose estimation typically relies on paired features as anchors, and authentication accuracy tends to improve with more precise pose alignment. In this paper, we propose a novel framework for joint identity verification and pose alignment of partial fingerprint pairs, aiming to leverage their inherent correlation to improve each other. To achieve this, we present a multi-task CNN (Convolutional Neural Network)-Transformer hybrid network, and design a pre-training task to enhance the feature extraction capability. Experiments on multiple public datasets (NIST SD14, FVC2002 DB1_A & DB3_A, FVC2004 DB1_A & DB2_A, FVC2006 DB1_A) and an in-house dataset demonstrate that our method achieves state-of-the-art performance in both partial fingerprint verification and relative pose estimation, while being more efficient than previous methods. Code is available at: https://github.com/XiongjunGuan/JIPNet.
Volume 20, pages 249-263
Citations: 0
Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization
IF 6.3 · CAS Tier 1 · Computer Science
IEEE Transactions on Information Forensics and Security · Pub Date: 2024-12-12 · DOI: 10.1109/TIFS.2024.3516564
Wenrui Yu; Qiongxiu Li; Milan Lopuhaä-Zwakenberg; Mads Græsbøll Christensen; Richard Heusdens
Abstract: Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source, thus embedding privacy as a core consideration in FL architectures, whether centralized or decentralized. Contrasting with recent findings by Pasquini et al., which suggest that decentralized FL does not empirically offer any additional privacy or security benefits over centralized models, our study provides compelling evidence to the contrary. We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection, both theoretically and empirically, compared to centralized approaches. The challenge of quantifying privacy loss through iterative processes has traditionally constrained the theoretical exploration of FL protocols. We overcome this by conducting a pioneering in-depth information-theoretic privacy analysis for both frameworks. Our analysis, considering both eavesdropping and passive adversary models, establishes bounds on privacy leakage. In particular, we show information-theoretically that the privacy loss in decentralized FL is upper bounded by the loss in centralized FL. Compared to the centralized case, where local gradients of individual participants are directly revealed, a key distinction of optimization-based decentralized FL is that the relevant information comprises differences of local gradients over successive iterations and the aggregated sum of different nodes' gradients over the network. This information complicates the adversary's attempt to infer private data. To bridge our theoretical insights with practical applications, we present detailed case studies involving logistic regression and deep neural networks. These examples demonstrate that while privacy leakage remains comparable in simpler models, complex models like deep neural networks exhibit lower privacy risks under decentralized FL. Extensive numerical tests further validate that decentralized FL is more resistant to privacy attacks, aligning with our theoretical findings.
Volume 20, pages 822-838
Citations: 0