Sana Qadir, Eman Waheed, Aisha Khanum, Seema Jehan
{"title":"对Web应用程序的有效安全测试的方法和工具进行比较评估。","authors":"Sana Qadir, Eman Waheed, Aisha Khanum, Seema Jehan","doi":"10.7717/peerj-cs.2821","DOIUrl":null,"url":null,"abstract":"<p><p>It is generally accepted that adopting both static application security testing (SAST) and dynamic application security testing (DAST) approaches is vital for thorough and effective security testing. However, this suggestion has not been comprehensively evaluated, especially with regard to the individual risk categories mentioned in Open Web Application Security Project (OWASP) Top 10:2021 and common weakness enumeration (CWE) Top 25:2023 lists. Also, it is rare to find any evidence-based recommendations for effective tools for detecting vulnerabilities from a specific risk category or severity level. These shortcomings increase both the time and cost of systematic security testing when its need is heightened by increasingly frequent and preventable incidents. This study aims to fill these gaps by empirically testing seventy-five real-world Web applications using four SAST and five DAST tools. Only popular, free, and open-source tools were selected and each Web application was scanned using these nine tools. From the report generated by these tools, we considered two parameters to measure effectiveness: count and severity of the vulnerability found. We also mapped the vulnerabilities to OWASP Top 10:2021 and CWE Top 25:2023 lists. Our results show that using only DAST tools is the preferred option for four OWASP Top 10:2021 risk categories while using only SAST tools is preferred for only three risk categories. Either approach is effective for two of the OWASP Top 10:2021 risk categories. For CWE Top 25:2023 list, all three approaches were equally effective and found vulnerabilities belonging to three risk categories each. We also found that none of the tools were able to detect any vulnerability in one OWASP Top 10:2021 risk category and in eight CWE Top 25:2023 categories. This highlights a critical limitation of popular tools. The most effective DAST tool was OWASP Zed Attack Proxy (ZAP), especially for detecting vulnerabilities in broken access control, insecure design, and security misconfiguration risk categories. Yasca was the best-performing SAST tool, and outperformed all other tools at finding high-severity vulnerabilities. For medium-severity and low-severity levels, the DAST tools Iron Web application Advanced Security testing Platform (WASP) and Vega performed better than all the other tools. These findings reveal key insights, such as, the superiority of DAST tools for detecting certain types of vulnerabilities and the indispensability of SAST tools for detecting high-severity issues (due to detailed static code analysis). This study also addresses significant limitations in previous research by testing multiple real-world Web applications across diverse domains (technology, health, and education), enhancing generalization of the findings. 
Unlike studies that rely primarily on proprietary tools, our use of open-source SAST and DAST tools ensures better reproducibility and accessibility for organizations with limited budget.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"11 ","pages":"e2821"},"PeriodicalIF":3.5000,"publicationDate":"2025-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12190248/pdf/","citationCount":"0","resultStr":"{\"title\":\"Comparative evaluation of approaches & tools for effective security testing of Web applications.\",\"authors\":\"Sana Qadir, Eman Waheed, Aisha Khanum, Seema Jehan\",\"doi\":\"10.7717/peerj-cs.2821\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>It is generally accepted that adopting both static application security testing (SAST) and dynamic application security testing (DAST) approaches is vital for thorough and effective security testing. However, this suggestion has not been comprehensively evaluated, especially with regard to the individual risk categories mentioned in Open Web Application Security Project (OWASP) Top 10:2021 and common weakness enumeration (CWE) Top 25:2023 lists. Also, it is rare to find any evidence-based recommendations for effective tools for detecting vulnerabilities from a specific risk category or severity level. These shortcomings increase both the time and cost of systematic security testing when its need is heightened by increasingly frequent and preventable incidents. This study aims to fill these gaps by empirically testing seventy-five real-world Web applications using four SAST and five DAST tools. Only popular, free, and open-source tools were selected and each Web application was scanned using these nine tools. From the report generated by these tools, we considered two parameters to measure effectiveness: count and severity of the vulnerability found. We also mapped the vulnerabilities to OWASP Top 10:2021 and CWE Top 25:2023 lists. Our results show that using only DAST tools is the preferred option for four OWASP Top 10:2021 risk categories while using only SAST tools is preferred for only three risk categories. Either approach is effective for two of the OWASP Top 10:2021 risk categories. For CWE Top 25:2023 list, all three approaches were equally effective and found vulnerabilities belonging to three risk categories each. We also found that none of the tools were able to detect any vulnerability in one OWASP Top 10:2021 risk category and in eight CWE Top 25:2023 categories. This highlights a critical limitation of popular tools. The most effective DAST tool was OWASP Zed Attack Proxy (ZAP), especially for detecting vulnerabilities in broken access control, insecure design, and security misconfiguration risk categories. Yasca was the best-performing SAST tool, and outperformed all other tools at finding high-severity vulnerabilities. For medium-severity and low-severity levels, the DAST tools Iron Web application Advanced Security testing Platform (WASP) and Vega performed better than all the other tools. These findings reveal key insights, such as, the superiority of DAST tools for detecting certain types of vulnerabilities and the indispensability of SAST tools for detecting high-severity issues (due to detailed static code analysis). 
This study also addresses significant limitations in previous research by testing multiple real-world Web applications across diverse domains (technology, health, and education), enhancing generalization of the findings. Unlike studies that rely primarily on proprietary tools, our use of open-source SAST and DAST tools ensures better reproducibility and accessibility for organizations with limited budget.</p>\",\"PeriodicalId\":54224,\"journal\":{\"name\":\"PeerJ Computer Science\",\"volume\":\"11 \",\"pages\":\"e2821\"},\"PeriodicalIF\":3.5000,\"publicationDate\":\"2025-04-30\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12190248/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"PeerJ Computer Science\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.7717/peerj-cs.2821\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2025/1/1 0:00:00\",\"PubModel\":\"eCollection\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"PeerJ Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.7717/peerj-cs.2821","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0
Abstract
It is generally accepted that adopting both static application security testing (SAST) and dynamic application security testing (DAST) approaches is vital for thorough and effective security testing. However, this suggestion has not been comprehensively evaluated, especially with regard to the individual risk categories mentioned in the Open Web Application Security Project (OWASP) Top 10:2021 and Common Weakness Enumeration (CWE) Top 25:2023 lists. It is also rare to find evidence-based recommendations for effective tools for detecting vulnerabilities from a specific risk category or severity level. These shortcomings increase both the time and cost of systematic security testing at a time when its need is heightened by increasingly frequent and preventable incidents. This study aims to fill these gaps by empirically testing seventy-five real-world Web applications using four SAST and five DAST tools. Only popular, free, and open-source tools were selected, and each Web application was scanned with all nine tools. From the reports generated by these tools, we considered two parameters to measure effectiveness: the count and the severity of the vulnerabilities found. We also mapped the vulnerabilities to the OWASP Top 10:2021 and CWE Top 25:2023 lists. Our results show that using only DAST tools is the preferred option for four OWASP Top 10:2021 risk categories, while using only SAST tools is preferred for just three risk categories. Either approach is effective for two of the OWASP Top 10:2021 risk categories. For the CWE Top 25:2023 list, all three approaches were equally effective, each finding vulnerabilities belonging to three risk categories. We also found that none of the tools were able to detect any vulnerability in one OWASP Top 10:2021 risk category and in eight CWE Top 25:2023 categories, which highlights a critical limitation of popular tools. The most effective DAST tool was OWASP Zed Attack Proxy (ZAP), especially for detecting vulnerabilities in the broken access control, insecure design, and security misconfiguration risk categories. Yasca was the best-performing SAST tool and outperformed all other tools at finding high-severity vulnerabilities. For medium-severity and low-severity levels, the DAST tools Iron Web Application Advanced Security Testing Platform (IronWASP) and Vega performed better than all the other tools. These findings reveal key insights, such as the superiority of DAST tools for detecting certain types of vulnerabilities and the indispensability of SAST tools for detecting high-severity issues (owing to their detailed static code analysis). This study also addresses significant limitations of previous research by testing multiple real-world Web applications across diverse domains (technology, health, and education), enhancing the generalizability of the findings. Unlike studies that rely primarily on proprietary tools, our use of open-source SAST and DAST tools ensures better reproducibility and accessibility for organizations with limited budgets.
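The abstract's two effectiveness parameters are the count and the severity of the vulnerabilities reported by each tool, mapped onto the OWASP Top 10:2021 and CWE Top 25:2023 lists. As a minimal sketch of how such an aggregation could be scripted (this is an illustration, not the authors' actual pipeline), the Python fragment below tallies findings by severity level and by OWASP category; the Finding structure, the partial CWE-to-OWASP mapping, and the sample data are all hypothetical assumptions.

# Illustrative sketch only: aggregate scanner findings by severity and by
# OWASP Top 10:2021 category, mirroring the two effectiveness parameters
# described in the abstract. The CWE-to-OWASP mapping is a small hypothetical
# excerpt, and the Finding fields and sample data are assumptions.
from collections import Counter
from dataclasses import dataclass

CWE_TO_OWASP = {
    79:  "A03:2021 Injection",                                   # cross-site scripting
    89:  "A03:2021 Injection",                                   # SQL injection
    287: "A07:2021 Identification and Authentication Failures",
    862: "A01:2021 Broken Access Control",                       # missing authorization
}

@dataclass
class Finding:
    tool: str       # e.g. "ZAP", "Yasca", "Vega"
    cwe_id: int
    severity: str   # "high", "medium", or "low"

def summarise(findings):
    """Tally findings per severity level and per OWASP Top 10:2021 category."""
    by_severity = Counter(f.severity for f in findings)
    by_category = Counter(CWE_TO_OWASP.get(f.cwe_id, "Unmapped") for f in findings)
    return by_severity, by_category

if __name__ == "__main__":
    sample = [
        Finding("ZAP", 79, "medium"),
        Finding("Yasca", 89, "high"),
        Finding("ZAP", 862, "medium"),
    ]
    severity_counts, category_counts = summarise(sample)
    print(dict(severity_counts))   # {'medium': 2, 'high': 1}
    print(dict(category_counts))   # {'A03:2021 Injection': 2, 'A01:2021 Broken Access Control': 1}

A per-tool version of the same tally (grouping by the tool field) would support the kind of tool-by-tool comparison the study reports, such as ranking tools by the number of high-severity findings.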
Journal description:
PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.