Mutation Testing based Safety Testing and Improving on DNNs

Yuhao Wei, Song Huang, Yu Wang, Ruilin Liu, Chunyan Xia

2022 IEEE 22nd International Conference on Software Quality, Reliability and Security (QRS), December 2022. DOI: 10.1109/QRS57517.2022.00087
In recent years, deep neural networks (DNNs) have become pervasive in daily life as data access and labeling have grown easier. However, DNNs have been shown to behave unpredictably, especially under small perturbations of their input data, which limits their application in self-driving and other safety-critical fields. Deliberate attacks, such as adversarial attacks, can cause extremely serious consequences. In this work, we design and evaluate a safety testing method for DNNs based on mutation testing, and propose an adversarial training method that combines the testing results with joint optimization. First, we apply adversarial mutations to the test datasets and measure each model's response to the adversarial samples with mutation scores. Next, we evaluate the validity of mutation scores as a quantitative safety indicator by comparing DNN models against their updated versions. Finally, we construct a joint optimization problem that incorporates the safety scores into adversarial training, improving both the safety of the model and the generalizability of its defense capability.
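The abstract does not spell out the mutation operator or the scoring formula, so the following is only a minimal sketch of what a mutation-score computation might look like, assuming an FGSM-style perturbation stands in for the paper's adversarial mutation and a mutant counts as "killed" when it flips a previously correct prediction. The names `adversarial_mutant` and `mutation_score` are illustrative, not the authors' API.

```python
import torch
import torch.nn.functional as F

def adversarial_mutant(model, x, y, epsilon=0.03):
    """Generate an adversarial mutant of x with a single FGSM step.
    This operator is an assumption; the paper's mutation is not specified."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each input in the direction that increases the loss.
    return (x + epsilon * x.grad.sign()).detach()

def mutation_score(model, loader, epsilon=0.03):
    """Fraction of correctly classified inputs whose prediction flips
    under adversarial mutation; a higher score suggests a less safe model."""
    model.eval()
    killed, total = 0, 0
    for x, y in loader:
        with torch.no_grad():
            clean_pred = model(x).argmax(dim=1)
        correct = clean_pred == y
        x_adv = adversarial_mutant(model, x, y, epsilon)
        with torch.no_grad():
            adv_pred = model(x_adv).argmax(dim=1)
        killed += (correct & (adv_pred != clean_pred)).sum().item()
        total += correct.sum().item()
    return killed / max(total, 1)
```

Comparing this score between a model and its updated version mirrors the paper's second step: if an update lowers the mutation score on the same mutated test set, it has (by this indicator) become safer.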
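Likewise, the exact form of the joint optimization problem is not given in the abstract. One common way to fold a safety signal into adversarial training is to weight an adversarial loss term against the clean loss; the sketch below assumes that reading, reusing `adversarial_mutant` from the previous sketch, with the coefficient `lam` standing in for however the safety score enters the objective.

```python
def joint_training_step(model, optimizer, x, y, lam=0.5, epsilon=0.03):
    """One adversarial training step on a joint clean/adversarial objective.
    lam is a hypothetical weight derived from the safety score; the paper's
    actual objective may differ."""
    model.train()
    # Generate mutants first, then clear the gradients that step produced.
    x_adv = adversarial_mutant(model, x, y, epsilon)
    optimizer.zero_grad()
    clean_loss = F.cross_entropy(model(x), y)
    adv_loss = F.cross_entropy(model(x_adv), y)
    # Jointly optimize accuracy on clean data and robustness on mutants.
    loss = (1 - lam) * clean_loss + lam * adv_loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this formulation, a model with a worse (higher) mutation score would receive a larger `lam`, pushing training effort toward the adversarial term; that coupling of testing results to the training objective is one plausible reading of "joint optimization with safety scores".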