{"title":"理解深度神经网络中能量与对抗鲁棒性的权衡","authors":"Kyungmi Lee, A. Chandrakasan","doi":"10.1109/SiPS52927.2021.00017","DOIUrl":null,"url":null,"abstract":"Adversarial examples, which are crafted by adding small inconspicuous perturbations to typical inputs in order to fool the prediction of a deep neural network (DNN), can pose a threat to security-critical applications, and robustness against adversarial examples is becoming an important factor for designing a DNN. In this work, we first examine the methodology for evaluating adversarial robustness that uses the first-order attack methods, and analyze three cases when this evaluation methodology overestimates robustness: 1) numerical saturation of cross-entropy loss, 2) non-differentiable functions in DNNs, and 3) ineffective initialization of the attack methods. For each case, we propose compensation methods that can be easily combined with the existing attack methods, thus provide a more precise evaluation methodology for robustness. Second, we benchmark the relationship between adversarial robustness and inference-time energy at an embedded hardware platform using our proposed evaluation methodology, and demonstrate that this relationship can be obscured by the three cases behind overestimation. Overall, our work shows that the robustness-energy trade-off has differences from the conventional accuracy-energy trade-off, and highlights importance of the precise evaluation methodology for robustness.","PeriodicalId":103894,"journal":{"name":"2021 IEEE Workshop on Signal Processing Systems (SiPS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Understanding the Energy vs. Adversarial Robustness Trade-Off in Deep Neural Networks\",\"authors\":\"Kyungmi Lee, A. Chandrakasan\",\"doi\":\"10.1109/SiPS52927.2021.00017\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Adversarial examples, which are crafted by adding small inconspicuous perturbations to typical inputs in order to fool the prediction of a deep neural network (DNN), can pose a threat to security-critical applications, and robustness against adversarial examples is becoming an important factor for designing a DNN. In this work, we first examine the methodology for evaluating adversarial robustness that uses the first-order attack methods, and analyze three cases when this evaluation methodology overestimates robustness: 1) numerical saturation of cross-entropy loss, 2) non-differentiable functions in DNNs, and 3) ineffective initialization of the attack methods. For each case, we propose compensation methods that can be easily combined with the existing attack methods, thus provide a more precise evaluation methodology for robustness. Second, we benchmark the relationship between adversarial robustness and inference-time energy at an embedded hardware platform using our proposed evaluation methodology, and demonstrate that this relationship can be obscured by the three cases behind overestimation. 
Overall, our work shows that the robustness-energy trade-off has differences from the conventional accuracy-energy trade-off, and highlights importance of the precise evaluation methodology for robustness.\",\"PeriodicalId\":103894,\"journal\":{\"name\":\"2021 IEEE Workshop on Signal Processing Systems (SiPS)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Workshop on Signal Processing Systems (SiPS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SiPS52927.2021.00017\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Workshop on Signal Processing Systems (SiPS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SiPS52927.2021.00017","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Understanding the Energy vs. Adversarial Robustness Trade-Off in Deep Neural Networks
Adversarial examples, crafted by adding small, inconspicuous perturbations to typical inputs in order to fool the prediction of a deep neural network (DNN), can pose a threat to security-critical applications, and robustness against adversarial examples is becoming an important factor in DNN design. In this work, we first examine the methodology for evaluating adversarial robustness using first-order attack methods, and analyze three cases in which this evaluation methodology overestimates robustness: 1) numerical saturation of the cross-entropy loss, 2) non-differentiable functions in DNNs, and 3) ineffective initialization of the attack methods. For each case, we propose a compensation method that can be easily combined with existing attack methods, thus providing a more precise evaluation methodology for robustness. Second, we benchmark the relationship between adversarial robustness and inference-time energy on an embedded hardware platform using our proposed evaluation methodology, and demonstrate that this relationship can be obscured by the three sources of overestimation. Overall, our work shows that the robustness-energy trade-off differs from the conventional accuracy-energy trade-off, and highlights the importance of a precise evaluation methodology for robustness.
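The three failure cases listed in the abstract correspond to well-known pitfalls of first-order robustness evaluation. The sketch below is a minimal, hedged illustration rather than the authors' actual compensation methods: it shows a PGD-style L-infinity attack in PyTorch that guards against case 1 by replacing cross-entropy with a margin (difference-of-logits) loss, whose gradient does not vanish when softmax saturates, and against case 3 by random initialization inside the perturbation ball. All function names and hyperparameter values here are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch of a first-order (PGD-style) attack with a
# saturation-resistant loss. Illustrative only; this is NOT the
# paper's proposed compensation method.
import torch

def margin_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Highest non-true logit minus the true-class logit.

    Unlike cross-entropy, this loss does not pass through softmax/exp,
    so its gradient stays informative even when the network is so
    confident that cross-entropy numerically saturates (case 1).
    """
    true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
    # Mask out the true class before taking the runner-up logit.
    masked = logits.clone()
    masked.scatter_(1, labels.unsqueeze(1), float("-inf"))
    runner_up = masked.max(dim=1).values
    return (runner_up - true_logit).mean()

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """PGD-style L-infinity attack on inputs x with labels y."""
    # Random start inside the eps-ball: a standard remedy for
    # ineffective (e.g., zero) initialization of the attack (case 3).
    delta = torch.empty_like(x).uniform_(-eps, eps)
    delta.requires_grad_(True)
    for _ in range(steps):
        loss = margin_loss(model(torch.clamp(x + delta, 0, 1)), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss, then project back into the eps-ball and
        # the valid pixel range [0, 1].
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps).detach()
        delta = (torch.clamp(x + delta, 0, 1) - x).requires_grad_(True)
    return torch.clamp(x + delta, 0, 1).detach()
```

Case 2 (non-differentiable functions such as quantizers, common in energy-efficient embedded deployments) is typically handled by substituting a differentiable surrogate on the backward pass; it is omitted here because it depends on the specific non-differentiable operation.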