{"title":"I2OL-Net: Intra-Inter Objectness Learning Network for Point-Supervised X-Ray Prohibited Item Detection","authors":"Chenyang Wang;Yan Yan;Jing-Hao Xue;Hanzi Wang","doi":"10.1109/TIFS.2025.3550052","DOIUrl":"10.1109/TIFS.2025.3550052","url":null,"abstract":"Automatic detection of prohibited items in X-ray images plays a crucial role in public security. However, existing methods rely heavily on labor-intensive box annotations. To address this, we investigate X-ray prohibited item detection under labor-efficient point supervision and develop an intra-inter objectness learning network (I2OL-Net). I2OL-Net consists of two key modules: an intra-modality objectness learning (intra-OL) module and an inter-modality objectness learning (inter-OL) module. The intra-OL module introduces a local-focus Gaussian masking block and a global random Gaussian masking block that collaboratively learn objectness in X-ray images. Meanwhile, the inter-OL module introduces a wavelet decomposition-based adversarial learning block and an objectness block, which effectively reduce the modality discrepancy between natural images and X-ray images and transfer the objectness knowledge learned from box-annotated natural images to X-ray images. As a result, I2OL-Net greatly alleviates the severe problem of part domination caused by large intra-class variations in X-ray images. Experimental results on four X-ray datasets show that I2OL-Net achieves superior performance with a significant reduction in annotation cost, thus enhancing its accessibility and practicality. 
The source code is released at <uri>https://github.com/houjoeng/I2OL-Net</uri>.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3045-3059"},"PeriodicalIF":6.3,"publicationDate":"2025-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143618282","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Enforcing Differential Privacy in Federated Learning via Long-Term Contribution Incentives","authors":"Xiangyun Tang;Luyao Peng;Yu Weng;Meng Shen;Liehuang Zhu;Robert H. Deng","doi":"10.1109/TIFS.2025.3550777","DOIUrl":"10.1109/TIFS.2025.3550777","url":null,"abstract":"Privacy-preserving Federated Learning (FL) based on Differential Privacy (DP) protects clients’ data by adding DP noise to samples’ gradients and has emerged as a de facto standard for data privacy in FL. However, the accuracy of global models in DP-based FL may be reduced significantly by rogue clients who deviate from the prescribed DP-based FL approach and selfishly inject excessive DP noise, i.e., apply a smaller privacy budget in the DP mechanism to obtain a higher level of security for themselves. Existing DP-based FL fails to prevent such attacks because they are imperceptible: under the DP-based FL system with random Gaussian noise, the local model parameters of rogue clients and honest clients have identical distributions. In particular, rogue local models show low performance, but directly filtering out lower-performance local models compromises the generalizability of global models, as local models trained on scarce data also perform poorly in early epochs. In this paper, we propose ReFL, a novel privacy-preserving FL system that enforces DP and avoids the reduction in global model accuracy caused by the excessive DP noise of rogue clients. Based on the observation that rogue local models with excessive DP noise and honest local models trained on scarce data exhibit different performance patterns over long-term training epochs, we propose a long-term contribution incentives scheme to evaluate clients’ reputations and identify rogue clients. Furthermore, we design a reputation-based aggregation that uses these incentive reputations to prevent rogue clients’ models from damaging global model accuracy. 
Extensive experiments demonstrate that ReFL achieves global model accuracy 0.77%-81.71% higher than existing DP-based FL methods in the presence of rogue clients.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3102-3115"},"PeriodicalIF":6.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143607777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MD-SONIC: Maliciously-Secure Outsourcing Neural Network Inference With Reduced Online Communication","authors":"Yansong Zhang;Xiaojun Chen;Ye Dong;Qinghui Zhang;Rui Hou;Qiang Liu;Xudong Chen","doi":"10.1109/TIFS.2025.3550834","DOIUrl":"10.1109/TIFS.2025.3550834","url":null,"abstract":"With the widespread deployment of Deep-Learning-as-a-Service, secure multi-party computation-based outsourcing neural network (NN) inference has garnered significant attention for its high-security guarantee. Nevertheless, under the dishonest-majority setting with malicious adversaries, prior secure inference works are still costly in terms of communication and run-time. Additionally, existing outsourcing frameworks impose substantial client-side overhead, which hinders deployment on resource-constrained devices. To address the above challenges, we propose MD-SONIC, an online efficient and maliciously-secure framework for outsourcing NN inference with a dishonest majority. We first construct communication-efficient n-party protocols for the basic primitives such as fixed-point multiplication and most significant bit extraction by combining mask-sharing and TinyOT-sharing with SPD<inline-formula> <tex-math>$\\mathbb {Z}_{2^{k}}$ </tex-math></inline-formula> seamlessly. Then, we build fast secure blocks for the widely used NN operators, including matrix multiplication, ReLU, and Maxpool, on top of our basic primitives. To enable an arbitrary number of users to outsource the secure inference task to n computing servers, we propose a lightweight-client and fast <inline-formula> <tex-math>$\\Sigma $ </tex-math></inline-formula> paradigm named SPIN, stemming from zero-knowledge proofs. Our SPIN can be instantiated into a set of efficient outsourcing protocols over multiple algebraic structures (e.g., finite field and ring). We also conduct extensive evaluations of MD-SONIC on various neural networks. Compared to the work by Damgård et al. 
(IEEE S&P’19) and MD-ML (USENIX Security’24), we achieve up to <inline-formula> <tex-math>$594.4\\times $ </tex-math></inline-formula> and <inline-formula> <tex-math>$45.1\\times $ </tex-math></inline-formula> online communication improvements, and improve the online execution time by up to <inline-formula> <tex-math>$14.3\\times $ </tex-math></inline-formula> (resp. <inline-formula> <tex-math>$20.5\\times $ </tex-math></inline-formula>) and <inline-formula> <tex-math>$1.8\\times $ </tex-math></inline-formula> (resp. <inline-formula> <tex-math>$2.3\\times $ </tex-math></inline-formula>) in LAN (resp. WAN).","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3534-3549"},"PeriodicalIF":6.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143608006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Secure Weighted Aggregation for Privacy-Preserving Federated Learning","authors":"Yunlong He;Jia Yu","doi":"10.1109/TIFS.2025.3550787","DOIUrl":"10.1109/TIFS.2025.3550787","url":null,"abstract":"Privacy-preserving federated learning can protect the privacy of model gradients/parameters in the model aggregation phase. Most existing schemes only consider the scenario where user models have the same weight in model aggregation. However, users often hold different numbers of training samples in practice. This makes the model convergence speed of existing schemes very slow. To solve this problem, we propose a privacy-preserving federated learning scheme with secure weighted aggregation. It is able to allocate appropriate user weights based on the user’s local data size with privacy protection. In addition, it is impossible for the cloud server to obtain the user’s original model parameters and local data size in the proposed scheme. Specifically, we use Lagrange interpolation to combine the model parameters and local data size into a set of ciphertexts. The cloud server can smoothly perform weighted aggregation based on these ciphertexts. Leveraging the Chinese Remainder Theorem, we convert the local data size into a series of verification values. This enables the user to verify the correctness of results returned from the server. We provide a theoretical analysis for the proposed scheme, demonstrating its effectiveness, privacy, and verifiability. We perform extensive experiments on the MNIST dataset. 
Experimental results demonstrate its effectiveness in terms of model performance, computation overhead, and communication overhead.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3475-3488"},"PeriodicalIF":6.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143608033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CR-DAP: A Comprehensive and Regulatory Decentralized Anonymous Payment System","authors":"Weiqi Dai;Yang Zhou;Xiaohai Dai;Kim-Kwang Raymond Choo;Xia Xie;Deqing Zou;Hai Jin","doi":"10.1109/TIFS.2025.3550821","DOIUrl":"10.1109/TIFS.2025.3550821","url":null,"abstract":"Among various blockchain applications, decentralized anonymous payment (DAP) systems stand out for their enhanced privacy protection compared to traditional payment methods. However, DAPs face challenges such as the lack of asset recovery and identity verification features. To ensure the long-term healthy development of DAP systems, adherence to legal regulations and privacy protection is equally critical. In response to these requirements, we propose a <inline-formula> <tex-math>$\\textsf {CR}$ </tex-math></inline-formula>-<inline-formula> <tex-math>$\\textsf {DAP}$ </tex-math></inline-formula> system that offers a secure and efficient solution without compromising on practicality. Our innovation lies in introducing an identity-based traceable anonymous signature scheme, which skillfully balances anonymity with traceability. This scheme supports private key retrieval and allows for identity tracking when necessary, addressing key pain points in existing anonymous payment systems and enhancing user trust. 
We have implemented the prototype of this signature scheme and the <inline-formula> <tex-math>$\\textsf {CR}$ </tex-math></inline-formula>-<inline-formula> <tex-math>$\\textsf {DAP}$ </tex-math></inline-formula> system, evaluating its performance to demonstrate its practicality.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3274-3286"},"PeriodicalIF":6.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143608034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Differentially Private Linearized ADMM Algorithm for Decentralized Nonconvex Optimization","authors":"Xiao-Yu Yue;Jiang-Wen Xiao;Xiao-Kang Liu;Yan-Wu Wang","doi":"10.1109/TIFS.2025.3550808","DOIUrl":"10.1109/TIFS.2025.3550808","url":null,"abstract":"Privacy preservation is a challenging problem in decentralized nonconvex optimization containing sensitive data. Prior approaches to decentralized nonconvex optimization are either not strong enough to protect privacy or exhibit low utility under a high privacy guarantee. To address these issues, we propose a differentially private linearized alternating direction method of multipliers (DP-LADMM), which achieves fast convergence for nonconvex objective functions while ensuring saddle/maximum avoidance under a differential privacy guarantee. We also apply the Analytic Gaussian Mechanism to track the cumulative privacy loss and provide a tight global differential privacy guarantee for DP-LADMM. The theoretical analysis offers an explicit convergence rate for our algorithm. To the best of our knowledge, this is the first paper to provide an explicit convergence rate for decentralized nonconvex optimization with differential privacy and saddle/maximum avoidance. 
Numerical simulations and comparison studies on decentralized estimation confirm the superiority of the algorithm and the effectiveness of global privacy preservation.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3316-3329"},"PeriodicalIF":6.3,"publicationDate":"2025-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143608008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Stable and Efficient Data-Free Model Attack With Label-Noise Data Generation","authors":"Zhixuan Zhang;Xingjian Zheng;Linbo Qing;Qi Liu;Pingyu Wang;Yu Liu;Jiyang Liao","doi":"10.1109/TIFS.2025.3550066","DOIUrl":"10.1109/TIFS.2025.3550066","url":null,"abstract":"The objective of a data-free closed-box adversarial attack is to attack a victim model without using internal information, training datasets or semantically similar substitute datasets. To handle such stricter attack scenarios, recent studies have employed generative networks to synthesize data for training substitute models. Nevertheless, these approaches suffer from unstable training and diminished attack efficiency. In this paper, we propose a novel query-efficient data-free closed-box adversarial attack method. To mitigate unstable training, for the first time, we directly manipulate the intermediate-layer feature of a generator without relying on any substitute models. Specifically, a label noise-based generation module is created to enhance the intra-class patterns by incorporating partial historical information during the learning process. Additionally, we present a feature-disturbed diversity generation method to augment the inter-class distance. Meanwhile, we propose an adaptive intra-class attack strategy to heighten attack capability within a limited query budget. In this strategy, entropy-based distance is utilized to characterize the relative information from model outputs, while positive classes and negative samples are used to improve attack efficiency. The comprehensive experiments conducted on six datasets demonstrate the superior performance of our method compared to six state-of-the-art data-free closed-box competitors in both label-only and probability-only attack scenarios. Intriguingly, our method can realize the highest attack success rate on the online Microsoft Azure model under an extremely low query budget. 
Additionally, the proposed approach not only achieves more stable training but also significantly reduces the query count needed for balanced data generation. Furthermore, our method maintains the best performance under existing defense models and a limited query budget.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3131-3145"},"PeriodicalIF":6.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143599001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Source Detection in Incomplete Networks via Sensor Deployment and Source Approaching","authors":"Le Cheng, Peican Zhu, Keke Tang, Chao Gao, Zhen Wang","doi":"10.1109/tifs.2025.3550069","DOIUrl":"https://doi.org/10.1109/tifs.2025.3550069","url":null,"abstract":"","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"5 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scale-Invariant Adversarial Attack against Arbitrary-scale Super-resolution","authors":"Yihao Huang, Xin Luo, Qing Guo, Felix Juefei-Xu, Xiaojun Jia, Weikai Miao, Geguang Pu, Yang Liu","doi":"10.1109/tifs.2025.3550079","DOIUrl":"https://doi.org/10.1109/tifs.2025.3550079","url":null,"abstract":"","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"5 1","pages":""},"PeriodicalIF":6.8,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"COKV: Key-Value Data Collection With Condensed Local Differential Privacy","authors":"Junpeng Zhang;Hui Zhu;Jiaqi Zhao;Rongxing Lu;Yandong Zheng;Jiezhen Tang;Hui Li","doi":"10.1109/TIFS.2025.3550064","DOIUrl":"10.1109/TIFS.2025.3550064","url":null,"abstract":"Local differential privacy (LDP) provides lightweight and provable privacy protection and has wide applications in private data collection. Key-value data, as a popular NoSQL structure, requires simultaneous frequency and mean estimations of each key, which poses a challenge to traditional LDP-based collection methods. Although many schemes have been proposed for the privacy protection of key-value data, they inadequately address condensed perturbation for keys and tight composition of privacy budgets, leading to suboptimal estimation accuracy. To address this issue, we propose an efficient key-value collection scheme (COKV) with tight privacy budget composition. In our scheme, we first design a padding and sampling protocol for key-value data to avoid privacy budget splitting. Second, to enhance the utility of key perturbation, we design a key perturbation primitive and optimize the perturbation range to improve computational efficiency. After that, we propose a key-value association perturbation algorithm whose value perturbation strategy guarantees that the output expectation equals the original value. Finally, we demonstrate that through a tight privacy budget composition, COKV can provide higher data utility under the same privacy level. Theoretical analysis shows that COKV achieves lower variance in frequency and mean estimations. 
Extensive experiments on both synthetic and real-world datasets also indicate that COKV outperforms the current state-of-the-art methods for secure key-value data collection.","PeriodicalId":13492,"journal":{"name":"IEEE Transactions on Information Forensics and Security","volume":"20 ","pages":"3260-3273"},"PeriodicalIF":6.3,"publicationDate":"2025-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}