Demo: SGCode: A Flexible Prompt-Optimizing System for Secure Generation of Code

Khiem Ton, Nhi Nguyen, Mahmoud Nazzal, Abdallah Khreishah, Cristian Borcea, NhatHai Phan, Ruoming Jin, Issa Khalil, Yelong Shen

arXiv:2409.07368 [cs.CR], 11 September 2024
Abstract
This paper introduces SGCode, a flexible prompt-optimizing system for generating secure code with large language models (LLMs). SGCode integrates recent prompt-optimization approaches with LLMs in a unified system accessible through front-end and back-end APIs, enabling users to 1) generate secure code that is free of vulnerabilities, 2) review and share security analyses, and 3) easily switch from one prompt-optimization approach to another, while gaining insights into model and system performance. We deployed SGCode on an AWS server with PromSec, an approach that optimizes prompts by combining an LLM and security tools with a lightweight generative adversarial graph neural network to detect and fix security vulnerabilities in the generated code. Extensive experiments show that SGCode is practical as a public tool for gaining insights into the trade-offs among model utility, secure code generation, and system cost. SGCode adds only a marginal cost compared with prompting LLMs directly. SGCode is available at: http://3.131.141.63:8501/.
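
The abstract describes a generate-analyze-optimize pipeline: an LLM produces code, security tools flag vulnerabilities, and a prompt optimizer (in PromSec, a graph neural network) rewrites the prompt before the next generation round. The Python sketch below illustrates that control flow against a hypothetical SGCode-style back-end. The endpoint URL, response schema, and the use of Bandit as the security analyzer are illustrative assumptions, and the string-based prompt repair is only a stand-in for PromSec's GNN-based optimizer, not the paper's implementation.

```python
"""Minimal sketch of a generate-analyze-optimize loop of the kind SGCode
orchestrates. The HTTP endpoint, response schema, and the choice of Bandit
as the analyzer are illustrative assumptions, not the paper's code."""
import json
import subprocess
import tempfile

import requests

BACKEND_URL = "http://localhost:8000/generate"  # hypothetical back-end endpoint


def generate_code(prompt: str) -> str:
    """Ask the back-end LLM service to generate code for the prompt."""
    resp = requests.post(BACKEND_URL, json={"prompt": prompt}, timeout=60)
    resp.raise_for_status()
    return resp.json()["code"]  # hypothetical response schema


def analyze_security(code: str) -> list[dict]:
    """Run a static security analyzer (here: Bandit) and return its findings."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run(
        ["bandit", "-f", "json", path], capture_output=True, text=True
    )
    return json.loads(result.stdout).get("results", [])


def optimize_prompt(prompt: str, findings: list[dict]) -> str:
    """Fold analyzer feedback back into the prompt (a crude stand-in for
    PromSec's GNN-based prompt optimizer)."""
    issues = "; ".join(f["issue_text"] for f in findings)
    return f"{prompt}\nAvoid these security issues: {issues}"


def secure_generate(prompt: str, max_rounds: int = 3) -> tuple[str, list[dict]]:
    """Iterate generate -> analyze -> re-prompt until no findings remain."""
    for _ in range(max_rounds):
        code = generate_code(prompt)
        findings = analyze_security(code)
        if not findings:
            return code, []
        prompt = optimize_prompt(prompt, findings)
    return code, findings  # last attempt plus any remaining findings
```

Bounding the loop with max_rounds reflects the trade-off the abstract highlights: each extra round can remove more vulnerabilities but adds LLM and analysis cost on top of plain prompting.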