Analysis of the Effect of Feature Denoising from the Perspective of Corruption Robustness
Hyunha Hwang, Se-Hun Kim, Mincheol Cha, Min-Ho Choi, Kyujoong Lee, Hyuk-Jae Lee
{"title":"基于腐败鲁棒性的特征去噪效果分析","authors":"Hyunha Hwang, Se-Hun Kim, Mincheol Cha, Min-Ho Choi, Kyujoong Lee, Hyuk-Jae Lee","doi":"10.1109/ITC-CSCC58803.2023.10212895","DOIUrl":null,"url":null,"abstract":"Adversarial attack is a method that aims to cause incorrect predictions in a deep learning model by making slight perturbations to the input. As a result of this vulnerability, various studies have been conducted to improve adversarial robustness. However, deep learning models are also vulnerable to distribution mismatch between training data and test data. This mismatch can occur due to natural corruption in test data. Research on corruption robustness has been less explored compared to adversarial robustness. This paper analyzes the effect of feature denoising network, which is for improving adversarial robustness, in terms of corruption robustness. Experimental results show that feature denoising network can also lead to improved robustness against common corruptions.","PeriodicalId":220939,"journal":{"name":"2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Analysis of the Effect of Feature Denoising from the Perspective of Corruption Robustness\",\"authors\":\"Hyunha Hwang, Se-Hun Kim, Mincheol Cha, Min-Ho Choi, Kyujoong Lee, Hyuk-Jae Lee\",\"doi\":\"10.1109/ITC-CSCC58803.2023.10212895\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial attack is a method that aims to cause incorrect predictions in a deep learning model by making slight perturbations to the input. As a result of this vulnerability, various studies have been conducted to improve adversarial robustness. However, deep learning models are also vulnerable to distribution mismatch between training data and test data. This mismatch can occur due to natural corruption in test data. Research on corruption robustness has been less explored compared to adversarial robustness. This paper analyzes the effect of feature denoising network, which is for improving adversarial robustness, in terms of corruption robustness. 
Experimental results show that feature denoising network can also lead to improved robustness against common corruptions.\",\"PeriodicalId\":220939,\"journal\":{\"name\":\"2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC)\",\"volume\":\"23 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ITC-CSCC58803.2023.10212895\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 International Technical Conference on Circuits/Systems, Computers, and Communications (ITC-CSCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ITC-CSCC58803.2023.10212895","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An adversarial attack aims to cause incorrect predictions in a deep learning model by applying slight perturbations to the input. Because of this vulnerability, various studies have been conducted to improve adversarial robustness. However, deep learning models are also vulnerable to distribution mismatch between training data and test data, which can arise from natural corruption in the test data. Corruption robustness has been explored far less than adversarial robustness. This paper analyzes the effect of a feature denoising network, originally proposed to improve adversarial robustness, from the perspective of corruption robustness. Experimental results show that the feature denoising network can also improve robustness against common corruptions.
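The abstract does not spell out the architecture, but feature denoising networks in the adversarial-robustness literature (e.g., Xie et al., 2019) typically insert a non-local-means-style denoising block with a residual connection into the backbone. The sketch below is a minimal, hypothetical PyTorch rendering of such a block, assuming a dot-product affinity and a 1x1 convolution before the residual addition; it illustrates the general technique, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FeatureDenoisingBlock(nn.Module):
    """Hypothetical non-local-means-style feature denoising block with a
    residual connection, in the spirit of Xie et al. (2019). The paper's
    exact variant is not specified in the abstract."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 convolution applied to the denoised features before the
        # residual addition (a common design, so the block can be inserted
        # into an existing backbone).
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        flat = x.view(n, c, h * w)                    # (N, C, HW)
        # Dot-product affinity between every pair of spatial positions.
        attn = torch.bmm(flat.transpose(1, 2), flat)  # (N, HW, HW)
        attn = F.softmax(attn, dim=-1)
        # Each position becomes a weighted mean of all positions,
        # suppressing spatially isolated (noisy) feature activations.
        denoised = torch.bmm(flat, attn.transpose(1, 2)).view(n, c, h, w)
        return x + self.conv(denoised)                # residual connection


if __name__ == "__main__":
    block = FeatureDenoisingBlock(channels=64)
    feats = torch.randn(2, 64, 16, 16)  # a batch of intermediate feature maps
    print(block(feats).shape)           # torch.Size([2, 64, 16, 16])
```

In designs of this kind, the block sits between backbone stages, and the residual connection leaves the backbone's features unchanged when the denoising branch contributes little. The same spatial smoothing that dampens adversarial perturbations could plausibly also dampen natural corruptions such as noise, which is the connection the paper investigates empirically.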