Author: Shasha Zhang
DOI: 10.1109/ICAICA52286.2021.9497888
Venue: 2021 IEEE International Conference on Artificial Intelligence and Computer Applications (ICAICA)
Publication date: 2021-06-28
A Framework of Vulnerable Code Dataset Generation by Open-Source Injection
Evaluation benchmarks play an important role in the design of defect detection algorithms and tools. In particular, with the development of deep learning techniques, code defect detection models based on deep neural networks require large numbers of training and testing cases. Existing test cases fall far short of the requirements of new algorithm design and verification: on the one hand, the number of test cases designed manually or collected from open-source projects is small; on the other hand, test cases generated automatically from rules share similar patterns, are highly redundant, and have simple structure. This paper proposes an algorithm for code defect injection and test case generation based on open-source projects. The basic idea is to find reaching definitions in open-source projects and to modify the source code according to the analysis results, so as to generate a defect dataset containing a large number of test cases whose features resemble those of open-source code. This paper selects 8 open-source projects to verify the proposed method and generates more than 6,000 null pointer dereference test cases in total. We use existing tools to evaluate the injected test cases, and the results show that the proposed method can generate a large number of high-quality test cases.
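The core idea described above — locate a reaching definition of a pointer and rewrite it so that a later dereference becomes a null pointer dereference — can be illustrated with a minimal sketch. The snippet below is purely illustrative and assumes a naive textual rewrite; the paper's actual approach uses real reaching-definitions dataflow analysis, and the regex, function name, and sample C code here are not from the paper.

```python
import re

def inject_null_def(c_source: str) -> str:
    """Toy defect injector (illustrative sketch, not the paper's tool).

    Rewrites the first pointer initialization in the C source to NULL.
    Because that initialization is a reaching definition for later uses,
    any subsequent dereference of the pointer becomes a null pointer
    dereference, yielding a defective variant of the original code.
    """
    # Match e.g. "int *p = malloc(...);" keeping the declaration part
    # ("int *p = ") and replacing the initializer expression with NULL.
    pattern = re.compile(r"(\w+\s*\*\s*\w+\s*=\s*)[^;]+;")
    return pattern.sub(r"\1NULL;  /* injected defect */", c_source, count=1)

original = """
int *p = malloc(sizeof(int));
*p = 42;   /* dereference: safe in the original program */
"""
injected = inject_null_def(original)
print(injected)
```

Because the injected variant keeps the surrounding code of the host project intact, the generated test case retains the structural features of real open-source code, which is exactly what rule-generated synthetic cases lack.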