Selecting Secure Web Applications Using Trustworthiness Benchmarking

Afonso Araújo Neto, M. Vieira
International Journal of Dependable and Trustworthy Information Systems, 2(2), 1-16, April-June 2011. DOI: 10.4018/jdtis.2011040101. Cited by: 13.

Abstract

The multiplicity of existing software and component alternatives for web applications, especially in open source communities, has boosted interest in suitable benchmarks able to assist in the selection of candidate solutions with respect to several quality attributes. However, the huge success of performance and dependability benchmarking contrasts with the small advances in security benchmarking. Traditional vulnerability/attack detection techniques can hardly be used alone to benchmark security, as security depends on hidden vulnerabilities and on subtle properties of the system and its environment. A comprehensive security benchmarking process should therefore consist of two steps: elimination of flawed alternatives, followed by trustworthiness benchmarking. In this paper, the authors propose a trustworthiness benchmark, based on the systematic collection of evidence, that can be used to select one among several web applications from a security point of view. They evaluate this benchmark by comparing its results with an evaluation conducted by a group of security experts and programmers. Results show that the proposed benchmark provides security rankings similar to those produced by human experts. In fact, although experts may take days to gather the information and rank the alternative web applications, the benchmark consistently provides similar results in a matter of minutes.
The computer industry has a well-established infrastructure for performance evaluation, in which the Transaction Processing Performance Council (TPC) (http://www.tpc.org) benchmarks are recognized as among the most successful benchmarking initiatives in the whole industry. Furthermore, the concept of dependability benchmarking has gained ground in recent years, having already led to proposals of dependability benchmarks for operating systems, web servers, databases, and transactional systems in general (Kanoun & Spainhower, 2005). Security, however, has been largely absent from these efforts, in clear contrast with performance and dependability. In theory, a security benchmark would provide a metric (or a small set of metrics) able to characterize the degree to which security goals are met in the system under testing (Payne, 2006), allowing developers and administrators to compare alternatives and make informed decisions. No clear methodology to accomplish this has been proposed so far. Traditional security metrics are hard to define and compute (Torgerson, 2007), as they involve making isolated estimations of the ability of an unknown individual (e.g., a hacker) to discover and maliciously exploit an unknown system characteristic (e.g., a vulnerability). While techniques to find, correct, and prevent actual vulnerabilities flourish in the research community (Zanero, Carettoni, & Zanchetta, 2005), the lack of accurate and representative security metrics makes the conception of security benchmarking an extremely difficult task (Bondavalli, 2009). An alternative way to tackle this problem is to look for metrics that systematize and summarize the trustworthiness that can be justifiably placed in a system or application. Instead of quantifying absolute security factors, trust-based metrics are grounded on the idea of quantifying the available evidence regarding the trustworthiness of the assessed application.
However, as trust does not necessarily provide guarantees, security benchmarking can only be accomplished as a twofold process, with trustworthiness being the metric used to select among non-obviously-flawed alternatives. In other words, a reliable benchmarking approach should provide a set of security guarantees by forcing the systems under evaluation to pass a set of basic security assessments before considering trustworthiness to support the final selection (e.g., in a web application benchmarking campaign, no application should present actual vulnerabilities detectable during testing; the ones that present no vulnerabilities are then ranked using a process like the one proposed in this paper). Trust-based metrics make it possible to characterize "the degree to which security goals are met in the given system or component" by summarizing the amount of protection it has in terms of security mechanisms, processes, configurations, procedures, and behaviors. In the web context, these metrics can actually be used in several scenarios, including:

• Comparing the trustworthiness of alternative web applications. This is extremely useful for system administrators when selecting, from a set of alternative solutions that implement the same high-level requirements, the one that offers more guarantees in terms of security (based on the available security evidence).

• Comparing the trustworthiness of alternative software components. In a development environment, this is relevant for developers selecting the most trustworthy software components to be integrated in an application, especially when considering component-based development (Crnkovic, Chaudron, & Larsson, 2006).

• Redirecting the web application development effort. By comparing the software components developed in a project, it is possible to identify the ones that require more attention (e.g., testing and rework) in terms of security.
This allows effectively managing the development effort and is of the utmost importance in the context of large, complex projects.
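The two-step process described above (first eliminating alternatives with detectable vulnerabilities, then ranking the remainder by the security evidence collected) can be sketched in code. This is an illustrative sketch only, not the paper's actual benchmark: the `Candidate` record, the evidence checklist, and the weights are hypothetical placeholders for whatever evidence items and scoring a real trustworthiness benchmark would define.

```python
from dataclasses import dataclass, field

@dataclass
class Candidate:
    # Hypothetical record for one web application under evaluation.
    name: str
    vulnerabilities_found: int  # result of the first-step security testing
    evidence: dict = field(default_factory=dict)  # evidence item -> present?

# Illustrative evidence checklist with weights; a real benchmark would
# define its own systematically collected evidence items and weights.
EVIDENCE_WEIGHTS = {
    "parameterized_queries": 3.0,
    "input_validation": 2.0,
    "secure_session_management": 2.0,
    "documented_security_process": 1.0,
}

def trustworthiness_score(c: Candidate) -> float:
    """Sum the weights of the evidence items the candidate satisfies."""
    return sum(w for item, w in EVIDENCE_WEIGHTS.items() if c.evidence.get(item))

def benchmark(candidates: list) -> list:
    """Two steps: drop obviously flawed alternatives, then rank the rest."""
    sound = [c for c in candidates if c.vulnerabilities_found == 0]   # step 1
    ranked = sorted(sound, key=trustworthiness_score, reverse=True)   # step 2
    return [(c.name, trustworthiness_score(c)) for c in ranked]
```

Note that a candidate with detected vulnerabilities is excluded outright rather than merely penalized, mirroring the requirement that trustworthiness only ranks alternatives that have already passed the basic security assessments.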