Self-Supervised Disentangled Embedding For Robust Image Classification
Lanqi Liu, Zhenyu Duan, Guozheng Xu, Yi Xu
2021 IEEE International Conference on Image Processing (ICIP), September 19, 2021. DOI: 10.1109/ICIP42928.2021.9506493
The vulnerability of deep learning algorithms to adversarial samples has recently received wide attention. Most existing defense methods consider only the attack's influence at the image level, while the effect of correlation among feature components has not been investigated. In fact, once one feature component is successfully attacked, its correlated components can be attacked with higher probability. In this paper, a self-supervised disentanglement-based defense framework is proposed, providing a general tool that disentangles features by greatly reducing the correlation among feature components and thus significantly improves the robustness of the classification network. The proposed framework reveals the important role of disentangled embedding in defending against adversarial samples. Extensive experiments on several benchmark datasets validate that the proposed framework remains robust under a wide range of adversarial attacks. Moreover, the proposed model can be combined with any typical defense method as an effective enhancement strategy.
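The abstract's central idea, reducing correlation among feature components so that a successful attack on one component does not propagate to others, can be sketched as a decorrelation penalty on a batch of embeddings. This is a hypothetical illustration only: the paper's actual self-supervised objective and loss are not given in the abstract, and the function below (`decorrelation_penalty`) is an assumed, generic formulation that penalizes off-diagonal entries of the embedding covariance matrix.

```python
import numpy as np

def decorrelation_penalty(z):
    """Sum of squared off-diagonal sample-covariance entries of embeddings.

    z: array of shape (n, d) — a batch of n d-dimensional feature vectors.
    A value near zero means the feature components are (linearly) decorrelated.
    Hypothetical sketch of the disentanglement idea, not the paper's loss.
    """
    z = z - z.mean(axis=0, keepdims=True)       # center each component
    cov = (z.T @ z) / (len(z) - 1)              # d x d sample covariance
    off_diag = cov - np.diag(np.diag(cov))      # zero out the variances
    return float((off_diag ** 2).sum())

# Strongly correlated components incur a large penalty; independent ones, near zero.
rng = np.random.default_rng(0)
a = rng.normal(size=(256, 1))
corr = np.hstack([a, a + 0.01 * rng.normal(size=(256, 1))])   # near-duplicate columns
indep = rng.normal(size=(256, 2))                              # independent columns
print(decorrelation_penalty(corr), decorrelation_penalty(indep))
```

In a training pipeline, a term like this would be added to the classification loss so the network is pushed toward embeddings whose components carry independent information — the property the abstract credits for the improved adversarial robustness.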