Robust Generative Adaptation Network for Open-Set Adversarial Defense

Yanchun Li; Long Huang; Shujuan Tian; Haolin Liu; Zhetao Li

IEEE Transactions on Information Forensics and Security, vol. 20, pp. 1649–1664. Published 2025-01-13. DOI: 10.1109/TIFS.2025.3529311. Available at https://ieeexplore.ieee.org/document/10839470/
Abstract: In open-set recognition scenarios, deep learning models are required to handle samples from unknown categories, which better reflects real-world conditions. However, this task poses significant challenges to current closed-set recognition models, and the emergence of adversarial samples further exacerbates the issue. Existing open-set adversarial defense methods still lack a comprehensive exploration of model architectures, and adversarial training methods remain suboptimal at generalizing to various types of noise. In this paper, we propose the Robust Generative Adaptation Network (RGAN), which improves closed-set recognition accuracy and open-set detection performance by optimizing the model architecture for open-set adversarial defense. We design a robust block that can be embedded within deep learning models to constrain the propagation of adversarial attacks, thereby enhancing the model's robustness. Simultaneously, we employ a noise generator to create perturbations tailored to specific adversarial samples, and we leverage these perturbations to improve the model's generalization to different forms of noise. We conduct comprehensive experiments on five widely used datasets and various classification architectures; the results demonstrate that RGAN achieves state-of-the-art (SOTA) performance on open-set adversarial defense tasks. The code and models are available at https://github.com/ycLi-CV/RGAN-main.
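To make the two ideas in the abstract concrete, the sketch below shows one plausible way a robust block and a sample-conditioned noise generator could be wired into a classifier and trained in a min-max fashion. This is a minimal PyTorch illustration based only on the abstract: the module names (RobustBlock, NoiseGenerator), layer choices, perturbation bound, and loss are assumptions, not the authors' actual RGAN implementation (see the linked repository for that).

```python
# Illustrative sketch (assumptions throughout), NOT the authors' RGAN code:
# (1) a "robust block" embedded in the backbone to damp the propagation of
#     adversarial perturbations, and (2) a noise generator that crafts
#     sample-specific, bounded perturbations used to broaden training noise.
# The real implementation is at https://github.com/ycLi-CV/RGAN-main.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RobustBlock(nn.Module):
    """Hypothetical gated residual block: a learned 1x1-conv gate suppresses
    perturbed activations before they propagate to deeper layers."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.gate = nn.Conv2d(channels, channels, 1)  # learned suppression gate

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = F.relu(self.bn(self.conv(x)))
        return x + torch.sigmoid(self.gate(h)) * h  # gated residual update


class NoiseGenerator(nn.Module):
    """Hypothetical generator producing an input-conditioned perturbation
    delta with ||delta||_inf <= eps (bounded via tanh scaling)."""

    def __init__(self, in_channels: int = 3, eps: float = 8 / 255):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, in_channels, 3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.eps * torch.tanh(self.net(x))


def make_classifier(num_classes: int = 10) -> nn.Module:
    # Robust blocks interleaved into a small convolutional backbone,
    # standing in for "embedding within deep learning models".
    return nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        RobustBlock(64),
        nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        RobustBlock(128),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(128, num_classes),
    )


def train_step(model, generator, opt_m, opt_g, x, y):
    """One min-max step: the generator ascends the classification loss on
    perturbed inputs; the classifier then descends it."""
    opt_g.zero_grad()
    (-F.cross_entropy(model(x + generator(x)), y)).backward()  # generator ascends
    opt_g.step()

    opt_m.zero_grad()  # also clears stale model grads from the step above
    F.cross_entropy(model(x + generator(x).detach()), y).backward()
    opt_m.step()
```

In this arrangement the generator keeps proposing fresh, sample-tailored perturbation patterns as the classifier hardens, which is one plausible reading of how generated noise could improve generalization across noise types; the paper's actual training objective may differ.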
Journal Introduction:
The IEEE Transactions on Information Forensics and Security covers the sciences, technologies, and applications relating to information forensics, information security, biometrics, and surveillance, as well as systems applications that incorporate these features.