AutoSafeCoder: A Multi-Agent Framework for Securing LLM Code Generation through Static Analysis and Fuzz Testing

Ana Nunez, Nafis Tanveer Islam, Sumit Kumar Jha, Peyman Najafirad

arXiv:2409.10737 (arXiv - CS - Software Engineering), September 16, 2024
Recent advances in automatic code generation with large language models (LLMs) have brought us closer to fully automated secure software development. However, existing approaches often rely on a single agent for code generation, which struggles to produce secure, vulnerability-free code. Traditional program synthesis with LLMs has focused primarily on functional correctness, often neglecting the critical security implications that arise at runtime.
To address these challenges, we propose AutoSafeCoder, a multi-agent framework that leverages LLM-driven agents for code generation, vulnerability analysis, and security enhancement through continuous collaboration. The framework consists of three agents: a Coding Agent responsible for code generation, a Static Analyzer Agent that identifies vulnerabilities, and a Fuzzing Agent that performs dynamic testing with a mutation-based fuzzing approach to detect runtime errors.
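As a rough illustration of how these three agents could interact, the Python sketch below shows one plausible version of the iterative loop. The agent objects, their method names (`generate`, `scan`, `fuzz`, `repair`), and the `Feedback` container are hypothetical stand-ins for exposition, not the authors' actual interfaces.

```python
# Minimal sketch of the generate -> analyze -> fuzz loop described above.
# The agent interfaces (coder, analyzer, fuzzer) and their method names are
# hypothetical illustrations, not the paper's actual API.
from dataclasses import dataclass, field


@dataclass
class Feedback:
    """Issues reported back to the Coding Agent for repair."""
    static_findings: list[str] = field(default_factory=list)
    runtime_errors: list[str] = field(default_factory=list)

    def is_empty(self) -> bool:
        return not (self.static_findings or self.runtime_errors)


def autosafecoder_loop(task: str, coder, analyzer, fuzzer, max_rounds: int = 5) -> str:
    """Iterate until the generated code passes both static and dynamic checks."""
    code = coder.generate(task)                    # Coding Agent: initial draft
    for _ in range(max_rounds):
        feedback = Feedback(
            static_findings=analyzer.scan(code),   # Static Analyzer Agent
            runtime_errors=fuzzer.fuzz(code),      # Fuzzing Agent (mutation-based)
        )
        if feedback.is_empty():                    # secure and crash-free: accept
            return code
        code = coder.repair(task, code, feedback)  # regenerate with findings as context
    return code                                    # best effort after max_rounds
```

The design choice this sketch highlights is that security checking is a gate inside the generation loop, not a one-shot post-processing step: findings from both analyzers flow back into the Coding Agent's context on every round.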
Our main contribution is to secure multi-agent code generation by integrating static and dynamic testing into an iterative loop around LLM code generation, improving the security of the resulting code. Experiments on the SecurityEval dataset demonstrate a 13% reduction in code vulnerabilities compared to baseline LLMs, with no compromise in functionality.
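To make the Fuzzing Agent's strategy concrete, here is a toy mutation-based fuzzer showing the core mutate-execute-observe cycle. The `mutate` and `fuzz` helpers and the UTF-8 decoding target are illustrative assumptions; the paper's agent is far more sophisticated.

```python
# Toy mutation-based fuzzer in the spirit of the Fuzzing Agent: mutate seed
# inputs at the byte level and record inputs whose execution raises.
# This is a sketch, not the paper's implementation.
import random


def mutate(seed: bytes, n_flips: int = 4) -> bytes:
    """Randomly overwrite a few bytes of the seed input."""
    data = bytearray(seed)
    if not data:
        return bytes(data)
    for _ in range(min(n_flips, len(data))):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)


def fuzz(target, seeds: list[bytes], trials: int = 1000) -> list[tuple[bytes, Exception]]:
    """Feed mutated inputs to `target`; collect the ones that crash it."""
    crashes = []
    for _ in range(trials):
        candidate = mutate(random.choice(seeds))
        try:
            target(candidate)
        except Exception as exc:  # any uncaught exception counts as a runtime error
            crashes.append((candidate, exc))
    return crashes


# Example: fuzzing a function that assumes its input is valid UTF-8.
if __name__ == "__main__":
    bad_inputs = fuzz(lambda b: b.decode("utf-8"), seeds=[b"hello world"], trials=200)
    print(f"{len(bad_inputs)} crashing inputs found")
```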