{"title":"Trustworthiness Benchmarking of Web Applications Using Static Code Analysis","authors":"Afonso Araújo Neto, M. Vieira","doi":"10.1109/ARES.2011.37","DOIUrl":null,"url":null,"abstract":"Benchmarking the security of web applications is complex and, although there are many proposals of metrics, no consensual quantitative security metric has been proposed so far. Static analysis is an effective approach for detecting vulnerabilities, but the complexity of applications and the large variety of vulnerabilities prevent any single tool from being foolproof. In this application paper we investigate the hypothesis of combining the output of multiple static code analyzers to define metrics for comparing the trustworthiness of web applications. Various experiments, including a benchmarking campaign over seven distinct open source web forums, show that the raw number of vulnerabilities reported by a set of tools allows rough trustworthiness comparison. We also study the use of normalization and false positive rate estimation to calibrate the output of each tool. Results show that calibration allows computing a very accurate metric that can be used to easily and automatically compare different applications.","PeriodicalId":254443,"journal":{"name":"2011 Sixth International Conference on Availability, Reliability and Security","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-08-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 Sixth International Conference on Availability, Reliability and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ARES.2011.37","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Benchmarking the security of web applications is complex and, although many metrics have been proposed, no consensus quantitative security metric has emerged so far. Static analysis is an effective approach for detecting vulnerabilities, but the complexity of applications and the large variety of vulnerabilities prevent any single tool from being foolproof. In this application paper we investigate the hypothesis that the outputs of multiple static code analyzers can be combined to define metrics for comparing the trustworthiness of web applications. Various experiments, including a benchmarking campaign over seven distinct open source web forums, show that the raw number of vulnerabilities reported by a set of tools allows a rough trustworthiness comparison. We also study the use of normalization and false positive rate estimation to calibrate the output of each tool. Results show that calibration yields a very accurate metric that can be used to easily and automatically compare different applications.
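To make the calibration idea concrete, here is a minimal sketch of one way to combine per-tool vulnerability counts into a single comparable score. The specific formula (discounting each tool's estimated false positive rate, then normalizing by a per-tool baseline before averaging) is an assumption for illustration only; the names `ToolReport`, `calibrated_score`, `toolX`, `toolY`, and all numbers are hypothetical and do not come from the paper.

```python
# Sketch: combine vulnerability counts from several static analyzers into
# one calibrated trustworthiness score. Assumed formula, not the paper's.
from dataclasses import dataclass


@dataclass
class ToolReport:
    tool: str          # analyzer name (hypothetical)
    raw_count: int     # vulnerabilities this tool reported for the application
    fp_rate: float     # estimated false positive rate of the tool, in [0, 1]
    baseline: float    # typical count this tool reports (normalization factor)


def calibrated_score(reports: list[ToolReport]) -> float:
    """Lower score = fewer likely-real vulnerabilities = more trustworthy."""
    total = 0.0
    for r in reports:
        likely_real = r.raw_count * (1.0 - r.fp_rate)  # discount false positives
        total += likely_real / r.baseline              # normalize per tool
    return total / len(reports)                        # average across tools


# Usage: compare two hypothetical applications analyzed by the same tool set.
app_a = [ToolReport("toolX", 120, 0.40, 100.0), ToolReport("toolY", 30, 0.10, 50.0)]
app_b = [ToolReport("toolX", 60, 0.40, 100.0), ToolReport("toolY", 80, 0.10, 50.0)]
print(calibrated_score(app_a), calibrated_score(app_b))
```

The normalization step keeps a chatty tool (one that reports many findings on every application) from dominating the combined score, which is the intuition the abstract attributes to calibration.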