Graph-based explainable vulnerability prediction
Hong Quy Nguyen, Thong Hoang, Hoa Khanh Dam, Aditya Ghose
Information and Software Technology, Volume 177, Article 107566. Published 2024-08-31. DOI: 10.1016/j.infsof.2024.107566
PDF: https://www.sciencedirect.com/science/article/pii/S095058492400171X/pdfft?md5=51c2432186d2a7513da1bb84a4daf260&pid=1-s2.0-S095058492400171X-main.pdf
Significant increases in cyberattacks worldwide have threatened the security of organizations, businesses, and individuals. Cyberattacks exploit vulnerabilities in software systems. Recent work has leveraged powerful and complex models, such as deep neural networks, to improve the predictive performance of vulnerability detection models. However, these models are often regarded as “black box” models, making it challenging for software practitioners to understand and interpret their predictions. This lack of explainability has resulted in a reluctance to adopt or deploy these vulnerability prediction models in industry applications. This paper proposes a novel approach, Genetic Algorithm-based Vulnerability Prediction Explainer, (herein GAVulExplainer), which generates explanations for vulnerability prediction models based on graph neural networks. GAVulExplainer leverages genetic algorithms to construct a subgraph explanation that represents the crucial factor contributing to the vulnerability. Experimental results show that our proposed approach outperforms baselines in providing concrete reasons for a vulnerability prediction.
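The abstract describes using a genetic algorithm to search for a small subgraph that explains a model's prediction. As a rough illustration of that general idea only (not the paper's actual method), the sketch below evolves bit-mask node subsets against a stand-in fitness function; the `fitness` objective, the `IMPORTANT` node set, and all parameters are invented for illustration, whereas GAVulExplainer would score candidate subgraphs against a trained graph neural network.

```python
import random

# Toy sketch of a genetic algorithm searching for a small node subset
# ("subgraph") that maximizes a scoring function. All names and numbers
# here are illustrative stand-ins, not the paper's GNN-based objective.

IMPORTANT = {1, 3}  # pretend these nodes drive the vulnerability prediction

def fitness(mask):
    """Reward covering 'important' nodes; penalize explanation size."""
    selected = {i for i, bit in enumerate(mask) if bit}
    return 2.0 * len(selected & IMPORTANT) - 0.5 * len(selected)

def evolve(num_nodes=6, pop_size=20, generations=40, seed=0):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(num_nodes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, num_nodes)
            child = a[:cut] + b[cut:]             # single-point crossover
            if rng.random() < 0.2:                # point mutation
                j = rng.randrange(num_nodes)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best)
```

Because selection is elitist, the best mask's fitness never decreases across generations; on this toy objective the search converges toward selecting exactly the "important" nodes while the size penalty prunes everything else.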
Journal overview:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "negative" results, and more. Read the Guide for Authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within the scope of the journal. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.