Authors: Lichao Feng, Xingya Wang, Shiyu Zhang, Zhihong Zhao
DOI: 10.1016/j.jss.2024.112201
Journal: Journal of Systems and Software, Volume 219, Article 112201
Published: 2024-08-31 (Journal Article); Impact Factor 3.700; JCR Q1, Computer Science, Software Engineering
Full text: https://www.sciencedirect.com/science/article/pii/S0164121224002450
DeepFeature: Guiding adversarial testing for deep neural network systems using robust features
With the deployment of Deep Neural Network (DNN) systems in security-critical fields, more and more researchers are concerned about DNN robustness. Unfortunately, DNNs are vulnerable to adversarial attacks that cause them to produce completely wrong outputs. This has inspired numerous testing works devoted to improving the adversarial robustness of DNNs. Coverage and uncertainty criteria have been proposed to guide sample selection for DNN retraining. However, they are largely limited to evaluating abnormal DNN behaviors rather than locating the root cause of adversarial vulnerability. This work aims to bridge that gap. We propose DeepFeature, an adversarial testing framework based on robust features. DeepFeature generates robust features related to the model's decision-making and locates, within these features, the weak features that the DNN fails to transform correctly; these weak features are the main culprits of adversarial vulnerability. DeepFeature then selects diverse samples containing weak features for adversarial retraining. Our evaluation shows that DeepFeature significantly improves the models' overall robustness (by 77.83% on average) and individual robustness (by 42.81‰ on average) in adversarial testing. Compared with coverage and uncertainty criteria, DeepFeature improves these two measures by 3.93% and 15.00%, respectively. The positive correlation coefficient between DeepFeature and improved robustness reaches 0.858, with a p-value of 0.001.
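The pipeline the abstract describes (flag features the model fails to handle robustly, then select samples exercising those weak features for adversarial retraining) can be sketched roughly as follows. This is a toy illustration under loose assumptions, not the paper's implementation: `toy_model`, `is_weak_feature`, and `select_retraining_samples` are all hypothetical names, and "weak" is approximated here as "a small perturbation along this feature flips the prediction".

```python
import random

def toy_model(x):
    # A trivial stand-in "DNN": classifies by the sign of a weighted sum.
    weights = [0.9, -0.1, 0.05]
    score = sum(w * v for w, v in zip(weights, x))
    return 1 if score > 0 else 0

def is_weak_feature(model, sample, idx, eps=0.5):
    # In this sketch, a feature is "weak" if perturbing it slightly flips
    # the model's output, i.e. the model fails to transform it robustly.
    perturbed = list(sample)
    perturbed[idx] += eps
    return model(perturbed) != model(sample)

def select_retraining_samples(model, dataset):
    # Keep samples that exercise at least one weak feature; these are the
    # candidates a DeepFeature-style loop would feed into retraining.
    selected = []
    for sample in dataset:
        weak = [i for i in range(len(sample))
                if is_weak_feature(model, sample, i)]
        if weak:
            selected.append((sample, weak))
    return selected

random.seed(0)
dataset = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(8)]
chosen = select_retraining_samples(toy_model, dataset)
print(f"{len(chosen)} of {len(dataset)} samples contain weak features")
```

In the actual framework the weak-feature test operates on learned robust features inside the network rather than raw inputs, but the selection logic (retrain only on samples that expose a located weakness) follows the same shape.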
Journal introduction:
The Journal of Systems and Software publishes papers covering all aspects of software engineering and related hardware-software-systems issues. All articles should include a validation of the idea presented, e.g. through case studies, experiments, or systematic comparisons with other approaches already in practice. Topics of interest include, but are not limited to:
• Methods and tools for, and empirical studies on, software requirements, design, architecture, verification and validation, maintenance and evolution
• Agile, model-driven, service-oriented, open source and global software development
• Approaches for mobile, multiprocessing, real-time, distributed, cloud-based, dependable and virtualized systems
• Human factors and management concerns of software development
• Data management and big data issues of software systems
• Metrics and evaluation, data mining of software development resources
• Business and economic aspects of software development processes
The journal welcomes state-of-the-art surveys and reports of practical experience for all of these topics.