Automatically assessing vulnerabilities discovered by compositional analysis

Saahil Ognawala, R. Amato, A. Pretschner, Pooja Kulkarni
{"title":"自动评估由组合分析发现的漏洞","authors":"Saahil Ognawala, R. Amato, A. Pretschner, Pooja Kulkarni","doi":"10.1145/3243127.3243130","DOIUrl":null,"url":null,"abstract":"Testing is the most widely employed method to find vulnerabilities in real-world software programs. Compositional analysis, based on symbolic execution, is an automated testing method to find vulnerabilities in medium- to large-scale programs consisting of many interacting components. However, existing compositional analysis frameworks do not assess the severity of reported vulnerabilities. In this paper, we present a framework to analyze vulnerabilities discovered by an existing compositional analysis tool and assign CVSS3 (Common Vulnerability Scoring System v3.0) scores to them, based on various heuristics such as interaction with related components, ease of reachability, complexity of design and likelihood of accepting unsanitized input. By analyzing vulnerabilities reported with CVSS3 scores in the past, we train simple machine learning models. By presenting our interactive framework to developers of popular open-source software and other security experts, we gather feedback on our trained models and further improve the features to increase the accuracy of our predictions. By providing qualitative (based on community feedback) and quantitative (based on prediction accuracy) evidence from 21 open-source programs, we show that our severity prediction framework can effectively assist developers with assessing vulnerabilities.","PeriodicalId":244058,"journal":{"name":"Proceedings of the 1st International Workshop on Machine Learning and Software Engineering in Symbiosis","volume":"50 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-07-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Automatically assessing vulnerabilities discovered by compositional analysis\",\"authors\":\"Saahil Ognawala, R. Amato, A. Pretschner, Pooja Kulkarni\",\"doi\":\"10.1145/3243127.3243130\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Testing is the most widely employed method to find vulnerabilities in real-world software programs. Compositional analysis, based on symbolic execution, is an automated testing method to find vulnerabilities in medium- to large-scale programs consisting of many interacting components. However, existing compositional analysis frameworks do not assess the severity of reported vulnerabilities. In this paper, we present a framework to analyze vulnerabilities discovered by an existing compositional analysis tool and assign CVSS3 (Common Vulnerability Scoring System v3.0) scores to them, based on various heuristics such as interaction with related components, ease of reachability, complexity of design and likelihood of accepting unsanitized input. By analyzing vulnerabilities reported with CVSS3 scores in the past, we train simple machine learning models. By presenting our interactive framework to developers of popular open-source software and other security experts, we gather feedback on our trained models and further improve the features to increase the accuracy of our predictions. 
By providing qualitative (based on community feedback) and quantitative (based on prediction accuracy) evidence from 21 open-source programs, we show that our severity prediction framework can effectively assist developers with assessing vulnerabilities.\",\"PeriodicalId\":244058,\"journal\":{\"name\":\"Proceedings of the 1st International Workshop on Machine Learning and Software Engineering in Symbiosis\",\"volume\":\"50 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-07-24\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 1st International Workshop on Machine Learning and Software Engineering in Symbiosis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3243127.3243130\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 1st International Workshop on Machine Learning and Software Engineering in Symbiosis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3243127.3243130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 12
Abstract
Testing is the most widely employed method for finding vulnerabilities in real-world software programs. Compositional analysis, based on symbolic execution, is an automated testing method for finding vulnerabilities in medium- to large-scale programs consisting of many interacting components. However, existing compositional analysis frameworks do not assess the severity of reported vulnerabilities. In this paper, we present a framework that analyzes vulnerabilities discovered by an existing compositional analysis tool and assigns them CVSS3 (Common Vulnerability Scoring System v3.0) scores, based on heuristics such as interaction with related components, ease of reachability, design complexity, and the likelihood of accepting unsanitized input. By analyzing vulnerabilities reported with CVSS3 scores in the past, we train simple machine learning models. By presenting our interactive framework to developers of popular open-source software and to other security experts, we gather feedback on the trained models and further refine the features to increase prediction accuracy. Drawing on qualitative (community feedback) and quantitative (prediction accuracy) evidence from 21 open-source programs, we show that our severity prediction framework can effectively assist developers in assessing vulnerabilities.
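To make the feature-to-score idea concrete, the sketch below shows one plausible shape of such a pipeline: a simple regressor trained to map per-vulnerability heuristic features to a CVSS3 base score (0.0-10.0). This is a minimal illustration only, not the paper's actual framework; the feature names, the synthetic training data, and the choice of a random-forest model are all assumptions made here for demonstration.

```python
# Illustrative sketch (hypothetical, not the paper's implementation):
# train a regressor that maps heuristic features of a discovered
# vulnerability to a predicted CVSS3 base score.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-vulnerability features, loosely mirroring the
# heuristics named in the abstract:
#   interactions - number of components interacting with the vulnerable one
#   reach_depth  - call depth from an entry point (ease of reachability)
#   complexity   - cyclomatic complexity of the enclosing function
#   unsanitized  - estimated likelihood [0, 1] of accepting unsanitized input
n = 200
X = np.column_stack([
    rng.integers(1, 20, n),   # interactions
    rng.integers(1, 15, n),   # reach_depth
    rng.integers(1, 50, n),   # complexity
    rng.random(n),            # unsanitized
])

# Synthetic CVSS3 base scores standing in for historically reported data.
y = np.clip(
    0.2 * X[:, 0] - 0.3 * X[:, 1] + 0.05 * X[:, 2] + 5.0 * X[:, 3]
    + rng.normal(0, 0.5, n),
    0.0, 10.0,
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A deliberately simple model, in the spirit of the abstract's
# "simple machine learning models".
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print(f"MAE on held-out vulnerabilities: {mean_absolute_error(y_test, pred):.2f}")
```

In a real deployment the training rows would come from past vulnerabilities with published CVSS3 scores, and developer feedback on predictions could drive iterative feature refinement, as the paper describes.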