IEEE Transactions on Information Forensics and Security: Latest Articles

Differential Privacy with Higher Utility by Exploiting Coordinate-wise Disparity: Laplace Mechanism Can Beat Gaussian in High Dimensions
IF 6.8, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-30 DOI: 10.1109/TIFS.2025.3536277
Gokularam Muthukrishnan, Sheetal Kalyani
Citations: 0
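No abstract is given in this listing, but the title points at a well-known trade-off: for a d-dimensional query whose coordinates have unequal sensitivities, per-coordinate Laplace noise can beat a single shared scale. Below is a minimal sketch of that general idea, not the paper's mechanism; the function name, the allocation rule, and the parameters are my assumptions. The scales b_i are chosen to minimize total Laplace variance subject to the standard composition bound sum_i(sens_i / b_i) = eps, which gives b_i proportional to sens_i^(1/3).

```python
import numpy as np

def coordinatewise_laplace(value, sens, eps, rng=None):
    """eps-DP release of a d-dimensional statistic with per-coordinate
    Laplace noise.  sens[i] bounds how much coordinate i can change
    between neighboring datasets.  Scales minimize total noise variance
    subject to sum_i sens[i] / b[i] == eps, giving b_i ~ sens[i]**(1/3).
    Illustrative sketch, not the paper's mechanism."""
    rng = rng if rng is not None else np.random.default_rng()
    value = np.asarray(value, dtype=float)
    sens = np.asarray(sens, dtype=float)
    # Variance-optimal allocation: b_i = sens_i^(1/3) * sum_j sens_j^(2/3) / eps
    b = sens ** (1.0 / 3.0) * np.sum(sens ** (2.0 / 3.0)) / eps
    return value + rng.laplace(scale=b), b
```

Uniform scaling (the plain Laplace mechanism) corresponds to b_i = (sum_j sens_j) / eps for every coordinate; when sensitivities are very uneven, the allocation above concentrates the noise where it costs the least utility.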
Robust Duality Learning for Unsupervised Visible-Infrared Person Re-Identification
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-30 DOI: 10.1109/TIFS.2025.3536613
Yongxiang Li;Yuan Sun;Yang Qin;Dezhong Peng;Xi Peng;Peng Hu
Unsupervised visible-infrared person re-identification (UVI-ReID) aims to retrieve pedestrian images of the same individual across distinct modalities, a task made challenging by the inherent heterogeneity gap and the absence of annotations, which are cost-prohibitive to obtain. Although existing methods employ self-training with clustering-generated pseudo-labels to bridge this gap, they implicitly assume that these pseudo-labels are predicted correctly. In practice, this assumption cannot hold: training a perfect model is difficult even with ground truths, let alone without them, so pseudo-labeling errors are inevitable. Based on this observation, this study introduces a new learning paradigm for UVI-ReID that accounts for Pseudo-Label Noise (PLN), which encompasses three challenges: noise overfitting, error accumulation, and noisy cluster correspondence. To conquer these challenges, we propose a novel robust duality learning framework (RoDE) for UVI-ReID that mitigates the adverse impact of noisy pseudo-labels. For noise overfitting, we propose a Robust Adaptive Learning mechanism (RAL) that dynamically prioritizes clean samples while deprioritizing noisy ones, avoiding overemphasis on noise. To circumvent the error accumulation of self-training, where the model tends to confirm its own mistakes, RoDE alternately trains two distinct models using pseudo-labels predicted by their counterparts, maintaining diversity and avoiding collapse into noise. This, however, introduces cross-cluster misalignment between the two models, on top of the misalignment between modalities, yielding a dual noisy cluster correspondence that is difficult to optimize. To address this issue, a Cluster Consistency Matching mechanism (CCM) ensures reliable alignment across modalities as well as across models by leveraging cross-cluster similarities. Extensive experiments on three benchmark datasets demonstrate the effectiveness of the proposed RoDE. (Vol. 20, pp. 1937-1948)
Citations: 0
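The abstract says RAL "dynamically prioritizes clean samples while deprioritizing noisy ones" without giving its form. A common loss-based proxy for that behavior, shown purely as an illustration and not as RoDE's actual RAL, is a smooth exponential down-weighting of high-loss samples (under PLN, small-loss samples tend to be the correctly pseudo-labeled ones):

```python
import numpy as np

def adaptive_sample_weights(losses, tau=1.0):
    """Loss-based sample reweighting: small-loss samples (likely clean
    under pseudo-label noise) keep weights near 1, large-loss samples
    (likely mislabeled) are smoothly deprioritized.  tau controls how
    sharply the weight decays with loss.  Hypothetical sketch only."""
    losses = np.asarray(losses, dtype=float)
    w = np.exp(-(losses - losses.min()) / tau)
    return w / w.max()
```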
Adversarial Example Soups: Improving Transferability and Stealthiness for Free
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-30 DOI: 10.1109/TIFS.2025.3536611
Bo Yang;Hengwei Zhang;Jindong Wang;Yulong Yang;Chenhao Lin;Chao Shen;Zhengyu Zhao
Transferable adversarial examples pose practical security risks, since they can mislead a target model without knowledge of its internals. A conventional recipe for maximizing transferability is to keep only the optimal adversarial example among all those produced by the optimization pipeline. In this paper, we revisit this convention for the first time and demonstrate that the discarded, sub-optimal adversarial examples can be reused to boost transferability. Specifically, we propose "Adversarial Example Soups" (AES), with AES-tune for averaging adversarial examples discarded during hyperparameter tuning and AES-rand for stability testing. AES is inspired by "model soups", which averages the weights of multiple fine-tuned models for improved accuracy without increasing inference time. Extensive experiments validate the broad effectiveness of AES, boosting 10 state-of-the-art transfer attacks and their combinations by up to 13% against 10 diverse (defensive) target models. We also show that AES generalizes to other settings, e.g., directly averaging multiple in-the-wild adversarial examples yields comparable success. A promising byproduct of AES is improved stealthiness, since averaging naturally reduces perturbation variance. (Vol. 20, pp. 1882-1894)
Citations: 0
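The core AES operation the abstract describes, averaging several adversarial examples crafted for the same input, can be sketched in a few lines. The re-projection onto the L-infinity ball keeps the soup a valid bounded perturbation; the exact AES-tune and AES-rand pipelines are in the paper, and this shows only the averaging step with names of my choosing.

```python
import numpy as np

def adversarial_example_soup(adv_examples, x_clean, eps):
    """Average several adversarial examples crafted for the same clean
    input (e.g. the sub-optimal ones usually discarded during
    hyperparameter tuning), then re-project onto the L_inf ball of
    radius eps around x_clean so the soup stays a valid perturbation."""
    x_clean = np.asarray(x_clean, dtype=float)
    soup = np.mean(np.stack([np.asarray(a, dtype=float) for a in adv_examples]), axis=0)
    return x_clean + np.clip(soup - x_clean, -eps, eps)
```

Averaging also explains the stealthiness byproduct the abstract mentions: independent perturbation components partially cancel, reducing variance.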
Enhancing Federated Learning Robustness using Locally Benignity-Assessable Bayesian Dropout
IF 6.8, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-30 DOI: 10.1109/TIFS.2025.3536777
Jingjing Xue, Sheng Sun, Min Liu, Qi Li, Ke Xu
Citations: 0
Protecting Your Attention During Distributed Graph Learning: Efficient Privacy-Preserving Federated Graph Attention Network
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-29 DOI: 10.1109/TIFS.2025.3536612
Jinhao Zhou;Jun Wu;Jianbing Ni;Yuntao Wang;Yanghe Pan;Zhou Su
Federated graph attention networks (FGATs) are gaining prominence for enabling collaborative, privacy-preserving graph model training. The attention mechanisms in FGATs sharpen the focus on crucial graph features for improved graph representation learning while keeping data decentralized. However, these mechanisms inherently process sensitive information, which is vulnerable to privacy threats such as graph reconstruction and attribute inference. Moreover, because attention assigns varying and changing importance to nodes, traditional privacy methods struggle to balance privacy and utility across node sensitivities. Our study fills this gap by proposing an efficient privacy-preserving FGAT (PFGAT). We present an attention-based dynamic differential privacy (DP) approach built on an improved multiplication triplet (IMT). Specifically, we first propose an IMT mechanism that leverages a reusable triplet generation method to compute the attention mechanism efficiently and securely. Second, we employ an attention-based privacy budget that dynamically adjusts privacy levels according to the significance of node data, optimizing the privacy-utility trade-off. Third, a hybrid neighbor aggregation algorithm tailors DP mechanisms to the characteristics of neighbor nodes, mitigating the adverse impact of DP on graph attention network (GAT) utility. Extensive experiments on benchmark datasets confirm that PFGAT maintains high efficiency and ensures robust privacy protection against potential threats. (Vol. 20, pp. 1949-1964)
Citations: 0
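The "attention-based privacy budget" in the abstract suggests giving each node a share of the total epsilon in proportion to its attention score, so important nodes are perturbed less. The following is a hypothetical sketch of that allocation idea; PFGAT's actual scheme, including the IMT-based secure computation, is not reproduced here, and all names are mine.

```python
import numpy as np

def attention_scaled_noise(features, attention, total_eps, sensitivity=1.0, rng=None):
    """Split a total privacy budget across nodes in proportion to their
    attention scores: high-attention nodes receive a larger per-node
    budget and therefore less Laplace noise.  Returns the noised
    features and the per-node budgets.  Illustrative sketch only."""
    rng = rng if rng is not None else np.random.default_rng()
    features = np.asarray(features, dtype=float)
    a = np.asarray(attention, dtype=float)
    eps_i = total_eps * a / a.sum()    # per-node budget, sums to total_eps
    scale = sensitivity / eps_i        # Laplace scale per node (row)
    noise = rng.laplace(scale=scale[:, None], size=features.shape)
    return features + noise, eps_i
```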
Imprints: Mitigating Watermark Removal Attacks With Defensive Watermarks
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-29 DOI: 10.1109/TIFS.2025.3536299
Xiaofu Chen;Jiangyi Deng;Yanjiao Chen;Chaohao Li;Xin Fang;Cong Liu;Wenyuan Xu
Watermarks are essential for protecting the intellectual property of private images. However, a wide range of watermark removal attacks, especially AI-powered ones, can automatically predict and remove watermarks, posing serious concerns. In this paper, we present the design of Imprints, a defensive watermarking framework that fortifies watermarks against removal attacks. By formulating an optimization problem that deters watermark removal, we design image-independent and image-dependent defensive watermark models for effective batch and customized protection, respectively. We further make the watermark transferable to unseen removal attacks and robust to editing distortions. Extensive experiments verify that Imprints outperforms existing baselines in its immunity to 8 state-of-the-art watermark removal attacks and 3 commercial black-box watermark removal tools. The source code is available at https://github.com/Imprints-wm/Imprints. (Vol. 20, pp. 1866-1881)
Citations: 0
CLIP-FGDI: Exploiting Vision-Language Model for Generalizable Person Re-Identification
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-29 DOI: 10.1109/TIFS.2025.3536608
Huazhong Zhao;Lei Qi;Xin Geng
The vision-language model CLIP (Contrastive Language-Image Pretraining), known for its robust cross-modal capabilities, has been extensively applied in computer vision tasks. In this paper, we explore using CLIP, pretrained on large-scale image-text pairs to align visual and textual features, to acquire fine-grained and domain-invariant representations for generalizable person re-identification. Adapting CLIP to this task presents two primary challenges: learning more fine-grained features to enhance discriminative ability, and learning more domain-invariant features to improve generalization. For the first challenge, a three-stage strategy is proposed to boost the accuracy of text descriptions. Initially, the image encoder is trained to adapt effectively to person re-identification. In the second stage, the features extracted by the image encoder are used to generate textual descriptions (i.e., prompts) for each image. Finally, the text encoder with the learned prompts guides the training of the final image encoder. To enhance generalization to unseen domains, a bidirectional guiding method is introduced to learn domain-invariant image features: domain-invariant and domain-relevant prompts are generated, and both positive views (pulling image features toward domain-invariant prompts) and negative views (pushing image features away from domain-relevant prompts) are used to train the image encoder. Collectively, these strategies yield an innovative CLIP-based framework for learning fine-grained generalized features in person re-identification. The method is validated through comprehensive experiments on multiple benchmarks. Our code is available at https://github.com/Qi5Lei/CLIP-FGDI. (Vol. 20, pp. 2132-2142)
Citations: 0
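The bidirectional guiding the abstract describes (pull toward domain-invariant prompts, push from domain-relevant ones) is naturally expressed as a signed cosine-similarity objective. The sketch below is an assumption about its rough shape, not the paper's actual loss:

```python
import numpy as np

def bidirectional_guidance_loss(img_feat, invariant_prompt_feat, relevant_prompt_feat):
    """Pull the image feature toward the domain-invariant prompt
    feature (positive view) and push it away from the domain-relevant
    prompt feature (negative view), via cosine similarity.
    Illustrative sketch only."""
    def cos(u, v):
        u, v = np.asarray(u, float), np.asarray(v, float)
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    return -cos(img_feat, invariant_prompt_feat) + cos(img_feat, relevant_prompt_feat)
```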
All Points Guided Adversarial Generator for Targeted Attack Against Deep Hashing Retrieval
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-27 DOI: 10.1109/TIFS.2025.3534585
Rongxin Tu;Xiangui Kang;Chee Wei Tan;Chi-Hung Chi;Kwok-Yan Lam
Deep hashing is widely used in image retrieval, yet deep hashing networks are vulnerable to adversarial examples. Improving their robustness requires investigating adversarial attacks on these networks, especially targeted attacks. Among existing targeted attacks on hashing, generation-based methods have attracted increasing attention for their efficiency in producing adversarial examples. However, these methods supervise generation solely with the hash codes of positive samples, without letting the hash codes of all points in the training set participate directly in supervision, which weakens the attack. Since the training-set hash codes are produced by a well-trained hashing model, they retain rich semantic information about their samples, making it worthwhile to exploit them fully. We therefore propose a targeted attack that uses the hash codes of all points in the training set to directly guide the generation of adversarial examples. Specifically, we first decode the target label to obtain a corresponding feature map, concatenate it with the query image, and feed them into an encoder-decoder network with skip connections to obtain a perturbed example. To guide generation, we introduce a loss function that exploits the similarities between the perturbed example's hash code and the hash codes of all training points, thereby making full use of their rich semantic information. Experimental results show that our method outperforms state-of-the-art targeted attacks in effectiveness and transferability. The code is available at https://github.com/rongxintu3/APGA. (Vol. 20, pp. 1695-1709)
Citations: 0
Query Correlation Attack Against Searchable Symmetric Encryption With Supporting for Conjunctive Queries
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-27 DOI: 10.1109/TIFS.2025.3530692
Hanyong Liu;Lei Xu;Xiaoning Liu;Lin Mei;Chungen Xu
Searchable symmetric encryption (SSE) supporting conjunctive queries has garnered significant attention over the past decade for its practicality and wide applicability. While extensive research has addressed common leakages such as the access pattern and search pattern, mitigation efforts have focused primarily on structural issues inherent in scheme construction. In this work, we shift the focus to a less-explored yet critical leakage stemming from users' inherent querying behavior: query correlation. Originally introduced by Grubbs et al. [USENIX SEC'20], formally defined by Oya and Kerschbaum [USENIX SEC'22], and leveraged to mount a high-success query recovery attack against single-keyword SSE, query correlation raises a crucial question: does it pose a similar threat to conjunctive SSE? We make two key contributions. First, we generalize the notion of query correlation to conjunctive SSE, introducing the "generalized query correlation pattern", which captures co-occurrence relationships among queried tokens within a conjunctive query. Second, we develop a new passive query recovery attack, QCCK, which exploits both the search pattern and the generalized query correlation pattern to infer the mapping between tokens and keywords. Comprehensive evaluations on the Enron dataset confirm QCCK's efficacy, achieving a query recovery rate of approximately 80% with a keyword universe of 200 to 1000 keywords and an observed query set of 5000 to 50,000 queries. These findings highlight the significant threat posed by query correlation in conjunctive SSE and underscore the urgent need for robust countermeasures. (Vol. 20, pp. 1924-1936)
Citations: 0
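The "generalized query correlation pattern", co-occurrence relationships among tokens inside conjunctive queries, is straightforward to accumulate from a passive transcript. Below is a minimal sketch of the raw statistic an attacker like QCCK would start from; the attack itself, which matches this against auxiliary keyword statistics, is not shown, and the function name is mine.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_pattern(observed_queries):
    """Count how often each pair of (opaque) search tokens appears
    together inside one observed conjunctive query.  This is the raw
    co-occurrence statistic a passive network adversary can accumulate."""
    pairs = Counter()
    for query in observed_queries:
        for a, b in combinations(sorted(set(query)), 2):
            pairs[(a, b)] += 1
    return pairs
```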
Semantic Entropy Can Simultaneously Benefit Transmission Efficiency and Channel Security of Wireless Semantic Communications
IF 6.3, CAS Q1 (Computer Science)
IEEE Transactions on Information Forensics and Security Pub Date: 2025-01-27 DOI: 10.1109/TIFS.2025.3534562
Yankai Rong;Guoshun Nan;Minwei Zhang;Sihan Chen;Songtao Wang;Xuefei Zhang;Nan Ma;Shixun Gong;Zhaohui Yang;Qimei Cui;Xiaofeng Tao;Tony Q. S. Quek
Recently proliferated deep learning-based semantic communications (DLSC) focus on how transmitted symbols efficiently convey a desired meaning to the destination. However, the sensitivity of neural models and the openness of wireless channels make DLSC systems extremely fragile to malicious attacks. This inspires us to ask: can we further exploit the transmission-efficiency advantages of wireless semantic communications while also alleviating their security disadvantages? With this in mind, we propose SemEntropy, a novel method that answers this question by exploiting the semantics of data for both adaptive transmission and physical-layer encryption. Specifically, we first introduce semantic entropy, which captures the expectation of various semantic scores with respect to the transmission goal of the DLSC. Equipped with semantic entropy, we can dynamically assign informative semantics to Orthogonal Frequency Division Multiplexing (OFDM) subcarriers with better channel conditions in a fine-grained manner. We also use the entropy to guide semantic key generation to safeguard communications over open wireless channels. In this way, transmission efficiency and channel security are improved simultaneously. Extensive experiments over various benchmarks show the effectiveness of the proposed SemEntropy. We discuss why the proposed method benefits secure DLSC transmission and report some interesting findings, e.g., SemEntropy keeps semantic accuracy at 95% with 60% less transmission. (Vol. 20, pp. 2067-2082)
Citations: 0
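The abstract's transmission-side use of semantic entropy (rank semantic units by informativeness, then map the most informative ones to the best OFDM subcarriers) can be illustrated with a simple sort-and-match sketch. The entropy definition and the greedy assignment below are my assumptions, not the paper's exact formulation:

```python
import numpy as np

def semantic_entropy(scores):
    """Shannon entropy (bits) of a normalized semantic-score
    distribution; used here as a proxy for how informative a semantic
    unit is with respect to the transmission goal."""
    p = np.asarray(scores, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def assign_to_subcarriers(unit_entropies, channel_gains):
    """Greedy sort-and-match: the most informative semantic unit is
    carried by the subcarrier with the best channel gain, and so on
    down both sorted lists.  Returns {unit index: subcarrier index}."""
    units = np.argsort(unit_entropies)[::-1]      # most informative first
    carriers = np.argsort(channel_gains)[::-1]    # best channel first
    return {int(u): int(c) for u, c in zip(units, carriers)}
```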