{"title":"Support samples guided adversarial generalization","authors":"En Yang, Tong Sun, Jun Liu","doi":"10.1117/12.2667635","DOIUrl":null,"url":null,"abstract":"Adversarial training proves to be the most effective measure to classify adversarial perturbation, which is imperceptible but can drastically alter the output of the classifier. We review various theories behind the relationship between generalization gap and adversarial robustness and then raise the question: is it the input near the decision boundary that provides guidance for the classifier to learn the ideal decision boundary and therefore yield a more desired outcome? We provide quantitative confirmation that the expected required sample size correlates favorably with sample distance and further investigate the relationship between the robust classification error and the expected distance from the decision boundary to samples. Experimental results reveal that applying the data near the decision boundary as training sets can significantly promote adversarial generalization, which keeps consistence with the main conjectures presented in this work.","PeriodicalId":128051,"journal":{"name":"Third International Seminar on Artificial Intelligence, Networking, and Information Technology","volume":"197 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Third International Seminar on Artificial Intelligence, Networking, and Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2667635","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Adversarial training has proven to be the most effective measure for correctly classifying inputs under adversarial perturbations, which are imperceptible but can drastically alter the classifier's output. We review several theories relating the generalization gap to adversarial robustness and then ask: do inputs near the decision boundary guide the classifier toward learning the ideal decision boundary and thereby yield a more desirable outcome? We provide quantitative evidence that the expected required sample size correlates with the samples' distance from the boundary, and we further investigate the relationship between the robust classification error and the expected distance from the decision boundary to the samples. Experimental results show that using data near the decision boundary as the training set significantly improves adversarial generalization, consistent with the main conjectures presented in this work.
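To make the near-boundary selection idea concrete, here is a minimal sketch (not the paper's actual procedure): it uses the logit margin as a cheap proxy for a sample's distance to the decision boundary, keeps the lowest-margin fraction of each batch, and runs standard PGD adversarial training (Madry et al.) on that subset. All function names, the margin proxy, and the hyperparameters are illustrative assumptions, not details from this paper.

```python
# Hypothetical sketch: train adversarially on the samples closest to the
# decision boundary, with the logit margin standing in for boundary distance.
# Assumes image inputs scaled to [0, 1].
import torch
import torch.nn.functional as F

def logit_margin(model, x, y):
    """Margin between the true-class logit and the best other logit.
    Small margins suggest samples close to the decision boundary."""
    with torch.no_grad():
        logits = model(x)
    true = logits.gather(1, y.unsqueeze(1)).squeeze(1)
    # Mask out the true class before taking the runner-up logit.
    other = logits.clone()
    other.scatter_(1, y.unsqueeze(1), float('-inf'))
    return true - other.max(dim=1).values

def select_near_boundary(model, x, y, frac=0.5):
    """Keep the `frac` fraction of the batch with the smallest margins."""
    margins = logit_margin(model, x, y)
    k = max(1, int(frac * x.size(0)))
    idx = margins.argsort()[:k]  # ascending: smallest margins first
    return x[idx], y[idx]

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD with random start."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train_step(model, optimizer, x, y):
    # 1) Filter the batch down to near-boundary samples.
    x_nb, y_nb = select_near_boundary(model, x, y, frac=0.5)
    # 2) Perturb them and train on the adversarial versions.
    x_adv = pgd_attack(model, x_nb, y_nb)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y_nb)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice worth noting is that the margin is recomputed with the current model each step, so "near the boundary" is a moving target that tracks the classifier as it trains; a fixed precomputed subset would be a cheaper but cruder alternative.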