{"title":"深度代码模型鲁棒性攻击调查","authors":"Yubin Qu, Song Huang, Yongming Yao","doi":"10.1007/s10515-024-00464-7","DOIUrl":null,"url":null,"abstract":"<div><p>With the widespread application of deep learning in software engineering, deep code models have played an important role in improving code quality and development efficiency, promoting the intelligence and industrialization of software engineering. In recent years, the fragility of deep code models has been constantly exposed, with various attack methods emerging against deep code models and robustness attacks being a new attack paradigm. Adversarial samples after model deployment are generated to evade the predictions of deep code models, making robustness attacks a hot research direction. Therefore, to provide a comprehensive survey of robustness attacks on deep code models and their implications, this paper comprehensively analyzes the robustness attack methods in deep code models. Firstly, it analyzes the differences between robustness attacks and other attack paradigms, defines basic attack methods and processes, and then summarizes robustness attacks’ threat model, evaluation metrics, attack settings, etc. Furthermore, existing attack methods are classified from multiple dimensions, such as attacker knowledge and attack scenarios. In addition, common tasks, datasets, and deep learning models in robustness attack research are also summarized, introducing beneficial applications of robustness attacks in data augmentation, adversarial training, etc., and finally, looking forward to future key research directions.</p></div>","PeriodicalId":55414,"journal":{"name":"Automated Software Engineering","volume":"31 2","pages":""},"PeriodicalIF":2.0000,"publicationDate":"2024-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A survey on robustness attacks for deep code models\",\"authors\":\"Yubin Qu, Song Huang, Yongming Yao\",\"doi\":\"10.1007/s10515-024-00464-7\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>With the widespread application of deep learning in software engineering, deep code models have played an important role in improving code quality and development efficiency, promoting the intelligence and industrialization of software engineering. In recent years, the fragility of deep code models has been constantly exposed, with various attack methods emerging against deep code models and robustness attacks being a new attack paradigm. Adversarial samples after model deployment are generated to evade the predictions of deep code models, making robustness attacks a hot research direction. Therefore, to provide a comprehensive survey of robustness attacks on deep code models and their implications, this paper comprehensively analyzes the robustness attack methods in deep code models. Firstly, it analyzes the differences between robustness attacks and other attack paradigms, defines basic attack methods and processes, and then summarizes robustness attacks’ threat model, evaluation metrics, attack settings, etc. Furthermore, existing attack methods are classified from multiple dimensions, such as attacker knowledge and attack scenarios. 
In addition, common tasks, datasets, and deep learning models in robustness attack research are also summarized, introducing beneficial applications of robustness attacks in data augmentation, adversarial training, etc., and finally, looking forward to future key research directions.</p></div>\",\"PeriodicalId\":55414,\"journal\":{\"name\":\"Automated Software Engineering\",\"volume\":\"31 2\",\"pages\":\"\"},\"PeriodicalIF\":2.0000,\"publicationDate\":\"2024-08-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Automated Software Engineering\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://link.springer.com/article/10.1007/s10515-024-00464-7\",\"RegionNum\":2,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Automated Software Engineering","FirstCategoryId":"94","ListUrlMain":"https://link.springer.com/article/10.1007/s10515-024-00464-7","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
A survey on robustness attacks for deep code models
With the widespread application of deep learning in software engineering, deep code models have played an important role in improving code quality and development efficiency, promoting the intelligence and industrialization of software engineering. In recent years, the fragility of deep code models has been repeatedly exposed, and various attack methods against them have emerged, among which robustness attacks are a new attack paradigm: adversarial examples are generated after model deployment to evade the predictions of deep code models, making robustness attacks a hot research direction. Therefore, to provide a comprehensive survey of robustness attacks on deep code models and their implications, this paper comprehensively analyzes robustness attack methods for deep code models. It first analyzes the differences between robustness attacks and other attack paradigms and defines the basic attack methods and processes, then summarizes the threat model, evaluation metrics, and attack settings of robustness attacks. Furthermore, existing attack methods are classified along multiple dimensions, such as attacker knowledge and attack scenario. In addition, common tasks, datasets, and deep learning models used in robustness attack research are summarized, beneficial applications of robustness attacks in data augmentation, adversarial training, and related areas are introduced, and finally, key directions for future research are discussed.
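To make the attack paradigm described in the abstract concrete, the following is a minimal sketch of a black-box, query-based robustness attack that applies a semantics-preserving transformation (identifier renaming) to a code snippet until a deployed code classifier changes its prediction. The `predict` function, the candidate names, and the greedy search loop are hypothetical placeholders for illustration, not the method of the surveyed paper or of any specific attack.

```python
import re

# Hypothetical black-box interface to a deployed code classifier.
# Returns (label, confidence); this toy stand-in "detects a defect"
# whenever the identifier `buf` appears in the code.
def predict(code: str):
    return ("defective", 0.9) if "buf" in code else ("clean", 0.8)

def rename_identifier(code: str, old: str, new: str) -> str:
    # Whole-word replacement so renaming `buf` does not touch `buffer`.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

def greedy_attack(code: str, identifiers, candidates, max_queries=100):
    """Greedily rename identifiers to evade the model's original prediction."""
    orig_label, _ = predict(code)
    queries, current = 0, code
    for ident in identifiers:
        best_code, best_conf = current, 1.0
        for cand in candidates:
            if queries >= max_queries:
                return current, False
            mutated = rename_identifier(current, ident, cand)
            label, conf = predict(mutated)
            queries += 1
            if label != orig_label:
                return mutated, True       # prediction evaded
            if conf < best_conf:           # keep the most confusing rename so far
                best_code, best_conf = mutated, conf
        current = best_code
    return current, False

if __name__ == "__main__":
    snippet = "int copy(char *buf, int n) { return read(0, buf, n); }"
    adversarial, success = greedy_attack(snippet, ["buf", "n"], ["data", "tmp", "x"])
    print(success, adversarial)
```

Published attacks differ mainly in how the identifiers, candidate names, and search order are chosen (for example, gradient guidance in white-box settings or importance scores in black-box settings); the sketch only fixes the overall query-and-evade loop that such attacks share.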
Journal introduction:
This journal publishes research papers, tutorial papers, surveys, and accounts of significant industrial experience in the foundations, techniques, tools, and applications of automated software engineering technology. This includes the study of techniques for constructing, understanding, adapting, and modeling software artifacts and processes.
Coverage in Automated Software Engineering examines both automatic systems and collaborative systems as well as computational models of human software engineering activities. In addition, it presents knowledge representations and artificial intelligence techniques applicable to automated software engineering, and formal techniques that support or provide theoretical foundations. The journal also includes reviews of books, software, conferences and workshops.