{"title":"通过在不同输入模式的中间层施加线性和扰动来增加可转移性","authors":"Meet Shah, Srimanta Mandal, Shruti Bhilare, Avik Hati Dhirubhai","doi":"10.1109/SPCOM55316.2022.9840512","DOIUrl":null,"url":null,"abstract":"Despite high prediction accuracy, deep networks are vulnerable to adversarial attacks, designed by inducing human-indiscernible perturbations to clean images. Hence, adversarial samples can mislead already trained deep networks. The process of generating adversarial examples can assist us in investigating the robustness of different models. Many developed adversarial attacks often fail under challenging black-box settings. Hence, it is required to improve transferability of adversarial attacks to an unknown model. In this aspect, we propose to increase the rate of transferability by inducing linearity in a few intermediate layers of architecture. The proposed design does not disturb the original architecture much. The design focuses on significance of intermediate layers in generating feature maps suitable for a task. By analyzing the intermediate feature maps of architecture, a particular layer can be more perturbed to improve the transferability. The performance is further enhanced by considering diverse input patterns. 
Experimental results demonstrate the success in increasing the transferability of our proposition.","PeriodicalId":246982,"journal":{"name":"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Increasing Transferability by Imposing Linearity and Perturbation in Intermediate Layer with Diverse Input Patterns\",\"authors\":\"Meet Shah, Srimanta Mandal, Shruti Bhilare, Avik Hati Dhirubhai\",\"doi\":\"10.1109/SPCOM55316.2022.9840512\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Despite high prediction accuracy, deep networks are vulnerable to adversarial attacks, designed by inducing human-indiscernible perturbations to clean images. Hence, adversarial samples can mislead already trained deep networks. The process of generating adversarial examples can assist us in investigating the robustness of different models. Many developed adversarial attacks often fail under challenging black-box settings. Hence, it is required to improve transferability of adversarial attacks to an unknown model. In this aspect, we propose to increase the rate of transferability by inducing linearity in a few intermediate layers of architecture. The proposed design does not disturb the original architecture much. The design focuses on significance of intermediate layers in generating feature maps suitable for a task. By analyzing the intermediate feature maps of architecture, a particular layer can be more perturbed to improve the transferability. The performance is further enhanced by considering diverse input patterns. 
Experimental results demonstrate the success in increasing the transferability of our proposition.\",\"PeriodicalId\":246982,\"journal\":{\"name\":\"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)\",\"volume\":\"3 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-07-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SPCOM55316.2022.9840512\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Signal Processing and Communications (SPCOM)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SPCOM55316.2022.9840512","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Increasing Transferability by Imposing Linearity and Perturbation in Intermediate Layer with Diverse Input Patterns
Despite high prediction accuracy, deep networks are vulnerable to adversarial attacks, which are crafted by adding human-imperceptible perturbations to clean images. Such adversarial samples can therefore mislead already-trained deep networks, and generating them helps in investigating the robustness of different models. However, many existing adversarial attacks fail under challenging black-box settings, so the transferability of adversarial attacks to an unknown model needs to be improved. To this end, we propose to increase transferability by inducing linearity in a few intermediate layers of the architecture. The proposed design does not disturb the original architecture much; it instead highlights the significance of intermediate layers in generating feature maps suitable for a task. By analyzing the intermediate feature maps of the architecture, a particular layer can be perturbed more strongly to improve transferability. Performance is further enhanced by considering diverse input patterns. Experimental results demonstrate that our proposition succeeds in increasing transferability.
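As a rough illustration of the ingredients named above, the following sketch combines them on a toy fully connected model. This is NOT the paper's algorithm: the two-layer network, the choice of random circular shifts as the "diverse input patterns" (standing in for the usual random resize-and-pad transforms), and the feature-distortion objective are all illustrative assumptions. It shows (1) linearizing an intermediate layer (bypassing its ReLU) when computing gradients, (2) perturbing the input to distort that layer's feature map, and (3) averaging gradients over diverse input copies.

```python
# Hedged sketch (illustrative assumptions, not the authors' exact method).
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.standard_normal((8, 16))   # first-layer weights (toy stand-in)
W2 = rng.standard_normal((16, 4))   # second-layer weights (not used by the attack)

def intermediate(x, linear=False):
    """Intermediate feature map; linear=True bypasses the ReLU (induced linearity)."""
    z = x @ W1
    return z if linear else np.maximum(z, 0.0)

def attack_step(x_adv, x_clean, step=0.02, eps=0.1, n_diverse=4):
    """One FGSM-style step that maximizes intermediate feature distortion."""
    grad = np.zeros_like(x_adv)
    for _ in range(n_diverse):
        s = int(rng.integers(0, x_adv.size))        # one random shift per diverse copy
        fa = intermediate(np.roll(x_adv, s), linear=True)
        fc = intermediate(np.roll(x_clean, s), linear=True)
        # With the layer linearized, d/dx ||f(x) - fc||^2 = 2 (f(x) - fc) W1^T;
        # roll back by -s so the gradient aligns with the unshifted input.
        grad += np.roll(2.0 * (fa - fc) @ W1.T, -s) / n_diverse
    x_new = x_adv + step * np.sign(grad)            # sign-gradient ascent step
    return x_clean + np.clip(x_new - x_clean, -eps, eps)  # project to L-inf ball

x_clean = rng.standard_normal(8)
x_adv = x_clean + rng.uniform(-0.01, 0.01, 8)       # tiny random start
for _ in range(10):
    x_adv = attack_step(x_adv, x_clean)

distortion = np.linalg.norm(intermediate(x_adv) - intermediate(x_clean))
```

The perturbation stays inside a small L-infinity ball while the intermediate feature map of the adversarial input drifts away from the clean one; in transfer attacks, such feature-level distortion tends to survive a change of downstream classifier better than logit-level attacks.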