{"title":"Private Sample Alignment for Vertical Federated Learning: An Efficient and Reliable Realization","authors":"Yuxin Xi;Yu Guo;Shiyuan Xu;Chengjun Cai;Xiaohua Jia","doi":"10.1109/TIFS.2025.3555794","DOIUrl":"10.1109/TIFS.2025.3555794","url":null,"abstract":"Sample alignment is recognized as a vital component of vertical federated learning, which facilitates the integration of differential samples and high-quality model training. In this trend, providing Private Sample Alignment (PSA) among multi-clients becomes naturally necessary for preventing unauthorized sample access and client privacy exposure. However, exiting PSA protocols mainly focus on two-party scenarios and cannot be directly adapted to the multi-client delegated computing scenarios required for vertical federated learning. Besides, these studies fail to address the need for protocol robustness in practical federated Learning network environments. Therefore, we aim to design an efficient and reliable PSA protocol in multi-client vertical federated learning. In this work, we present the first practical PSA protocol for vertical federated learning, allowing multi-clients to efficiently identify common samples without revealing additional information. Toward this direction, our PSA protocol first explores the Learning With Errors (LWE) problem to create a lightweight delegated Private Set Intersection (PSI) scheme, enabling efficient sample intersection among multiple clients. To achieve the reliability of the PSA protocol, we devise a multi-client vector aggregation algorithm that securely delegates the server to calculate the sample intersection. Building on this foundation, we develop an efficient Threshold-based Private Sample Alignment (T-PSA) protocol that allows multiple clients to determine the intersection of their input samples only if the intersection size surpasses a specific threshold. We implement a prototype and conduct a thorough security analysis. Comprehensive evaluation results confirm the efficiency and practicality of our design.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3834-3848"},"PeriodicalIF":6.3,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SITA: Structurally Imperceptible and Transferable Adversarial Attacks for Stylized Image Generation","authors":"Jingdan Kang;Haoxin Yang;Yan Cai;Huaidong Zhang;Xuemiao Xu;Yong Du;Shengfeng He","doi":"10.1109/TIFS.2025.3555552","DOIUrl":"10.1109/TIFS.2025.3555552","url":null,"abstract":"Image generation technology has brought significant advancements across various fields but has also raised concerns about data misuse and potential rights infringements, particularly with respect to creating visual artworks. Current methods aimed at safeguarding artworks often employ adversarial attacks. However, these methods face challenges such as poor transferability, high computational costs, and the introduction of noticeable noise, which compromises the aesthetic quality of the original artwork. To address these limitations, we propose a Structurally Imperceptible and Transferable Adversarial (SITA) attacks. SITA leverages a CLIP-based destylization loss, which decouples and disrupts the robust style representation of the image. This disruption hinders style extraction during stylized image generation, thereby impairing the overall stylization process. Importantly, SITA eliminates the need for a surrogate diffusion model, leading to significantly reduced computational overhead. The method’s robust style feature disruption ensures high transferability across diverse models. Moreover, SITA introduces perturbations by embedding noise within the imperceptible structural details of the image. This approach effectively protects against style extraction without compromising the visual quality of the artwork. Extensive experiments demonstrate that SITA offers superior protection for artworks against unauthorized use in stylized generation. It significantly outperforms existing methods in terms of transferability, computational efficiency, and noise imperceptibility. Code is available at <uri>https://github.com/A-raniy-day/SITA</uri>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3936-3949"},"PeriodicalIF":6.3,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compromising LLM Driven Embodied Agents With Contextual Backdoor Attacks","authors":"Aishan Liu;Yuguang Zhou;Xianglong Liu;Tianyuan Zhang;Siyuan Liang;Jiakai Wang;Yanjun Pu;Tianlin Li;Junqi Zhang;Wenbo Zhou;Qing Guo;Dacheng Tao","doi":"10.1109/TIFS.2025.3555410","DOIUrl":"10.1109/TIFS.2025.3555410","url":null,"abstract":"Large language models (LLMs) have transformed the development of embodied intelligence. By providing a few contextual demonstrations (such as rationales and solution examples) developers can utilize the extensive internal knowledge of LLMs to effortlessly translate complex tasks described in abstract language into sequences of code snippets, which will serve as the execution logic for embodied agents. However, this paper uncovers a significant backdoor security threat within this process and introduces a novel method called Contextual Backdoor Attack. By poisoning just a few contextual demonstrations, attackers can covertly compromise the contextual environment of a closed-box LLM, prompting it to generate programs with context-dependent defects. These programs appear logically sound but contain defects that can activate and induce unintended behaviors when the operational agent encounters specific triggers in its interactive environment. To compromise the LLM’s contextual environment, we employ adversarial in-context generation to optimize poisoned demonstrations, where an LLM judge evaluates these poisoned prompts, reporting to an additional LLM that iteratively optimizes the demonstration in a two-player adversarial game using chain-of-thought reasoning. To enable context-dependent behaviors in downstream agents, we implement a dual-modality activation strategy that controls both the generation and execution of program defects through textual and visual triggers. We expand the scope of our attack by developing five program defect modes that compromise key aspects of confidentiality, integrity, and availability in embodied agents. To validate the effectiveness of our approach, we conducted extensive experiments across various tasks, including robot planning, robot manipulation, and compositional visual reasoning. Additionally, we demonstrate the potential impact of our approach by successfully attacking real-world autonomous driving systems. The contextual backdoor threat introduced in this study poses serious risks for millions of downstream embodied agents, given that most publicly available LLMs are third-party-provided. This paper aims to raise awareness of this critical threat. Our code and demos are available at <uri>https://contextual-backdoor.github.io/</uri>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3979-3994"},"PeriodicalIF":6.3,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Partner in Crime: Boosting Targeted Poisoning Attacks Against Federated Learning","authors":"Shihua Sun;Shridatt Sugrim;Angelos Stavrou;Haining Wang","doi":"10.1109/TIFS.2025.3555429","DOIUrl":"10.1109/TIFS.2025.3555429","url":null,"abstract":"Federated Learning (FL) exposes vulnerabilities to targeted poisoning attacks that aim to cause misclassification specifically from the source class to the target class. However, using well-established defense frameworks, the poisoning impact of these attacks can be greatly mitigated. We introduce a generalized pre-training stage approach to Boost Targeted Poisoning Attacks against FL, called BoTPA. Its design rationale is to leverage the model update contributions of all data points, including ones outside of the source and target classes, to construct an Amplifier set, in which we falsify the data labels before the FL training process, as a means to boost attacks. We comprehensively evaluate the effectiveness and compatibility of BoTPA on various targeted poisoning attacks. Under data poisoning attacks, our evaluations reveal that BoTPA can achieve a median Relative Increase in Attack Success Rate (RI-ASR) between 15.3% and 36.9% across all possible source-target class combinations, with varying percentages of malicious clients, compared to its baseline. In the context of model poisoning, BoTPA attains RI-ASRs ranging from 13.3% to 94.7% in the presence of the Krum and Multi-Krum defenses, from 2.6% to 49.2% under the Median defense, and from 2.9% to 63.5% under the Flame defense.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4152-4166"},"PeriodicalIF":6.3,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143736446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SLAPP: Poisoning Prevention in Federated Learning and Differential Privacy via Stateful Proofs of Execution","authors":"Norrathep Rattanavipanon;Ivan De Oliveira Nunes","doi":"10.1109/TIFS.2025.3555179","DOIUrl":"10.1109/TIFS.2025.3555179","url":null,"abstract":"The rise of IoT-driven distributed data analytics, coupled with increasing privacy concerns, has led to a demand for effective privacy-preserving and federated data collection/model training mechanisms. In response, approaches such as Federated Learning (FL) and Local Differential Privacy (LDP) have been proposed and attracted much attention over the past few years. However, they still share the common limitation of being vulnerable to poisoning attacks wherein adversaries compromising edge devices feed forged (a.k.a. “poisoned”) data to aggregation back-ends, undermining the integrity of FL/LDP results. In this work, we propose a system-level approach to remedy this issue based on a novel security notion of Proofs of Stateful Execution (<inline-formula> <tex-math>$mathsf {PoSX}$ </tex-math></inline-formula>) for IoT/embedded devices’ software. To realize the <inline-formula> <tex-math>$mathsf {PoSX}$ </tex-math></inline-formula> concept, we design <inline-formula> <tex-math>$mathsf {SLAPP}$ </tex-math></inline-formula>: a System-Level Approach for Poisoning Prevention. <inline-formula> <tex-math>$mathsf {SLAPP}$ </tex-math></inline-formula> leverages commodity security features of embedded devices – in particular ARM TrustZone-M security extensions – to verifiably bind raw sensed data to their correct usage as part of FL/LDP edge device routines. As a consequence, it offers robust security guarantees against poisoning. Our evaluation, based on real-world prototypes featuring multiple cryptographic primitives and data collection schemes, showcases <inline-formula> <tex-math>$mathsf {SLAPP}$ </tex-math></inline-formula>’s security and low overhead.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4167-4182"},"PeriodicalIF":6.3,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143713181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forward-Secure Hierarchical Delegable Signature for Smart Homes","authors":"Jianfei Sun;Guowen Xu;Yang Yang;Xuehuan Yang;Xiaoguo Li;Cong Wu;Zhen Liu;Guomin Yang;Robert H. Deng","doi":"10.1109/TIFS.2025.3555185","DOIUrl":"10.1109/TIFS.2025.3555185","url":null,"abstract":"Aiming to provide people with great convenience and comfort, smart home systems have been deployed in thousands of homes. In this paper, we focus on handling the security and privacy issues in such a promising system by customizing a new cryptographic primitive to provide the following security guarantees: 1) fine-grained, privacy-preserving authorization for smart home users and integrity protection of communication contents; 2) flexible self-sovereign permission delegation; 3) forward security of previous messages. To our knowledge, no previous system has been designed to consider these three security and privacy requirements simultaneously. To tackle these challenges, we put forward the first-ever efficient cryptographic primitive called the Forward-secure Hierarchical Delegable Signature (FS-HDS) scheme for smart homes. Specifically, we first propose a new primitive, efficient Hierarchical Delegable Signature (HDS) scheme, which is capable of supporting partial delegation capability while realizing privacy-preserving authorization and integrity guarantee. Then, we present an FS-HDS for smart homes with the efficient HDS as the underlying building block, which not only inherits all the desirable features of HDS but also ensures that the past content integrity is not affected even if the current secret key is compromised. We provide comprehensively strict security proofs to prove the security of our proposed solutions. Its performance is also validated via experimental simulations to showcase its practicability and effectiveness.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3950-3965"},"PeriodicalIF":6.3,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143713234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Do Not Skip Over the Offline: Verifiable Silent Preprocessing From Small Security Hardware","authors":"Wentao Dong;Lei Xu;Leqian Zheng;Huayi Duan;Cong Wang;Qian Wang","doi":"10.1109/TIFS.2025.3554329","DOIUrl":"10.1109/TIFS.2025.3554329","url":null,"abstract":"Multi-party computation (MPC) has gained increasing attention in both research and industry, with many protocols adopting the preprocessing model to optimize online performance through the strategic use of offline-generated, data-independent correlated randomness (or correlation). However, while extensive research has been dedicated to enhancing the online phase, the equally critical offline phase remains largely overlooked. This gap imposes significant yet unaddressed challenges in both security and efficiency, hindering the practical adoption of MPC systems. To address these challenges, we build upon the pseudorandom correlation generator (PCG) concept by Boyle et al. (CRYPTO’19, FOCS’20) and propose HPCG, a programmable, verifiable, and concretely efficient PCG construction using small security hardware. Our core technique, termed verifiable silent preprocessing, enables virtually unbounded, on-demand generation of diverse correlated randomness with provable correctness while effectively reducing offline overhead in a correlation-agnostic manner. To demonstrate the benefits of our approach, we experimentally evaluate HPCG and compare it with other preprocessing techniques. We also show how HPCG can further optimize specialized secure computation tasks (e.g., shuffling and equality test) by promoting new, customized correlations, which may be of new interest.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4860-4873"},"PeriodicalIF":6.3,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143713182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cyber Recovery From Dynamic Load Altering Attacks: Linking Electricity, Transportation, and Cyber Networks","authors":"Mengxiang Liu;Zhongda Chu;Fei Teng","doi":"10.1109/TIFS.2025.3553079","DOIUrl":"10.1109/TIFS.2025.3553079","url":null,"abstract":"The dynamic load alternating attack (DLAA) that manipulates the load demands in power grid by compromising internet of things (IoT) home appliances has posed significant threats to the grid’s stable and safe operation. Current effort is mainly devoted to the investigation of detecting and mitigating DLAAs, while, for a holistic cyber-resiliency-enhancement process, the last but not least cyber recovery from DLAAs (CRDA) has not been paid enough attention yet. Considering the interconnection among electricity, transportation, and cyber networks, this paper presents the first exploration of the CRDA, where two essential sub-tasks are formulated: i) Optimal design of repair crew routes to remove installed malware and ii) Robust adjustment of system operation to eliminate the mitigation costs with stability guarantee. Towards this end, linear stability constraints are established by utilising a sensitivity-based eigenvalue estimation method, where the eigenvalue sensitivity information is appropriately ordered and strategically selected to guarantee the estimation accuracy. Moreover, to assure the CRDA solution’s robustness to the adversary’s follow-up movement, the worst-case attack strategies in all attack scenarios during the recovery process are integrated. A mixed-integer linear programming (MILP) problem is subsequently developed for the CRDA with the primary objective to restore the secure but cost-inefficient mitigation operation mode to the cost-efficient one and secondarily to repair compromised IoT home appliances. Case studies are performed in IEEE power system cases to validate the eigenvalue estimation’s accuracy, the CRDA solution’s effectiveness and robustness, as well as the proposed CRDA’s extensibility.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3862-3876"},"PeriodicalIF":6.3,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143713183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DkvSSO: Delegatable Keyed-Verification Credentials for Efficient Anonymous Single Sign-On","authors":"Wenyi Xue;Yang Yang;Minming Huang;Yingjiu Li;Hwee Hwa Pang;Robert H. Deng","doi":"10.1109/TIFS.2025.3555196","DOIUrl":"10.1109/TIFS.2025.3555196","url":null,"abstract":"Anonymous single sign-on (ASSO) is an anonymous multi-service authentication method for end users. However, existing ASSO schemes suffer from heavy ticket requesting and verifying overheads, limiting their applications in large-scale settings. To address this problem, we propose a novel concept called keyed-verification anonymous credentials with disposable delegation (KVAC-DD) in the multi-verifier setting. Next, we extend KVAC-DD to build an efficient ASSO system, dubbed DkvSSO. The construction of DkvSSO can be instantiated in efficient prime-order groups, avoiding costly operations required in previous ASSO systems. We formally prove the security of our proposed constructions. Extensive experiments show that DkvSSO is significantly more efficient than existing ASSO schemes, making it suitable to be deployed in large-scale settings.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"4196-4211"},"PeriodicalIF":6.3,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143713180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enhanced Model Poisoning Attack and Multi-Strategy Defense in Federated Learning","authors":"Li Yang;Yinbin Miao;Ziteng Liu;Zhiquan Liu;Xinghua Li;Da Kuang;Hongwei Li;Robert H. Deng","doi":"10.1109/TIFS.2025.3555193","DOIUrl":"10.1109/TIFS.2025.3555193","url":null,"abstract":"As a new paradigm of distributed learning, Federated Learning (FL) has been applied in industrial fields, such as intelligent retail, finance and autonomous driving. However, several schemes that aim to attack robust aggregation rules and reducing the model accuracy have been proposed recently. These schemes do not maintain the sign statistics of gradients unchanged during attacks. Therefore, the sign statistics-based scheme SignGuard can resist most existing attacks. To defeat SignGuard and most existing cosine or distance-based aggregation schemes, we propose an enhanced model poisoning attack, ScaleSign. Specifically, ScaleSign uses a scaling attack and a sign modification component to obtain malicious gradients with higher cosine similarity and modify the sign statistics of malicious gradients, respectively. In addition, these two components have the least impact on the magnitudes of gradients. Then, we propose MSGuard, a Multi-Strategy Byzantine-robust scheme based on cosine mechanisms, symbol statistics, and spectral methods. Formal analysis proves that malicious gradients generated by ScaleSign have a closer cosine similarity than honest gradients. Extensive experiments demonstrate that ScaleSign can attack most of the existing Byzantine-robust rules, especially achieving a success rate of up to 98.23% for attacks on SignGuard. MSGuard can defend against most existing attacks including ScaleSign. Specifically, in the face of ScaleSign attack, the accuracy of MSGuard improves by up to 41.78% compared to SignGuard.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3877-3892"},"PeriodicalIF":6.3,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143713184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}