ALAT: Adversarial Label-guided Adversarial Training
Nan Wang, Yong Yu, Honghong Wang
Pattern Recognition Letters, Volume 196, Pages 250–256, July 2025
DOI: 10.1016/j.patrec.2025.06.012
Adversarial training is a widely used defense method for deep neural networks that enhances a model’s ability to withstand perturbations and improves network robustness. Previous studies have assessed adversarial attack performance from various angles, leading to enhancements in adversarial training that bolster network robustness. However, while some research has explored the effectiveness of misclassified data in improving adversarial training, these studies have ignored the importance of the adversarial predicted labels. We observe that the predicted labels of adversarial samples often correspond to high-probability categories in natural predictions. This paper proposes a new adversarial training method called Adversarial Label-guided Adversarial Training (ALAT). This method incorporates an additional regularization term that integrates adversarial prediction labels into the training process, guiding predictions closer to true labels and away from adversarial labels. Extensive experiments confirm its effectiveness.
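The abstract describes the loss only at a high level: a standard training term plus a regularizer that pushes predictions away from the label the attacked model assigns to the adversarial example. The sketch below is one plausible reading of that idea, not the paper's actual formulation; the penalty form (a negative log on the adversarial label's probability) and the weight `lam` are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def alat_style_loss(adv_logits, true_label, lam=1.0):
    """Cross-entropy on the adversarial example plus a hypothetical
    regularizer penalizing mass on the adversarial predicted label.

    `adv_logits`: model outputs on the perturbed input.
    `true_label`: index of the ground-truth class.
    `lam`: illustrative regularization weight (not from the paper).
    """
    p = softmax(adv_logits)
    # The "adversarial label": the class the model predicts under attack.
    adv_label = max(range(len(p)), key=p.__getitem__)
    ce = -math.log(p[true_label])
    # Illustrative penalty: discourage probability on the adversarial label;
    # skipped when the model already predicts the true class.
    if adv_label == true_label:
        penalty = 0.0
    else:
        penalty = -math.log(1.0 - p[adv_label])
    return ce + lam * penalty
```

With `lam=0.0` this reduces to plain adversarial cross-entropy; a positive `lam` adds an extra cost whenever the adversarial prediction disagrees with the true label, nudging the optimizer away from the attacker-induced class.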
Journal introduction:
Pattern Recognition Letters aims at rapid publication of concise articles of broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.