Generating API Parameter Security Rules with LLM for API Misuse Detection
Jinghua Liu, Yi Yang, Kai Chen, Miaoqian Lin
arXiv:2409.09288 · arXiv - CS - Software Engineering · Published 2024-09-14
In this paper, we present a new framework, named GPTAid, for automatically
generating API Parameter Security Rules (APSRs) by analyzing API source code
with an LLM, and for detecting API misuse caused by incorrect parameter use.
To validate the correctness of the LLM-generated APSRs, we propose an
execution feedback-checking approach, based on the observation that
security-critical API misuse is often caused by APSR violations, most of
which result in runtime errors. Specifically, GPTAid first uses an LLM to
generate raw APSRs and the Right calling code, and then generates Violation
code for each raw APSR by modifying the Right calling code with the LLM.
Subsequently, GPTAid dynamically executes each piece of Violation code and
filters out incorrect APSRs based on the resulting runtime errors. To further
produce concrete APSRs, GPTAid refines the filtered ones through code
differential analysis. In particular, since programming languages are more
precise than natural language, GPTAid identifies the key operations within
Violation code by differential analysis and then generates the corresponding
concrete APSR from those operations. These concrete APSRs can be precisely
interpreted into applicable detection code, which has proven effective in API
misuse detection. Evaluated on a dataset of 200 randomly selected APIs from
eight popular libraries, GPTAid achieves a precision of 92.3%. Moreover, it
generates six times more APSRs than state-of-the-art detectors on a
comparison dataset of previously reported bugs and APSRs. We further
evaluated GPTAid on 47 applications; it found 210 previously unknown security
bugs that could lead to severe security issues (e.g., system crashes), 150 of
which have been confirmed by developers after our reports.
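
The abstract gives no implementation details, so the following is a minimal, hypothetical Python sketch of the execution feedback-checking idea only: a raw APSR survives only if the code that violates it actually fails at runtime, which filters out hallucinated rules. All function names are illustrative (not from the paper), and the sketch runs Python snippets in subprocesses purely for illustration, regardless of which languages the eight evaluated libraries use.

```python
import subprocess
import sys
import tempfile

def triggers_runtime_error(snippet: str, timeout: float = 5.0) -> bool:
    """Execute a self-contained code snippet in a subprocess and report
    whether it fails at runtime (non-zero exit status or a hang)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(snippet)
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return result.returncode != 0
    except subprocess.TimeoutExpired:
        # Treat a hang as misbehavior too.
        return True

def filter_raw_apsrs(candidates):
    """Keep only rules whose Violation code misbehaves when executed.

    This mirrors the feedback check described in the abstract: if violating
    a candidate APSR produces a runtime error, the rule is plausibly real;
    if the Violation code runs cleanly, the rule is discarded as incorrect.
    Each candidate is a (rule_text, violation_code) pair.
    """
    return [rule for rule, violation_code in candidates
            if triggers_runtime_error(violation_code)]
```

For example, a raw APSR such as "this parameter must not be NULL" would be kept only if the Violation snippet that passes a null value actually crashes when run; a rule whose violation executes without error is filtered out.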