Jesus Friginal, D. Andrés, Juan-Carlos Ruiz-Garcia, Regina L. O. Moraes
{"title":"Using Dependability Benchmarks to Support ISO/IEC SQuaRE","authors":"Jesus Friginal, D. Andrés, Juan-Carlos Ruiz-Garcia, Regina L. O. Moraes","doi":"10.1109/PRDC.2011.13","DOIUrl":null,"url":null,"abstract":"The integration of Commercial-Off-The-Shelf (COTS) components in software has reduced time-to-market and production costs, but selecting the most suitable component, among those available, remains still a challenging task. This selection process, typically named benchmarking, requires evaluating the behaviour of eligible components in operation, and ranking them attending to quality characteristics. Most existing benchmarks only provide measures characterising the behaviour of software systems in absence of faults ignoring the hard impact that both accidental and malicious faults have on software quality. However, since using COTS to build a system may motivate the emergence of dependability issues due to the interaction between components, benchmarking the system in presence of faults is essential. The recent ISO/IEC 25045 standard copes with this lack by considering accidental faults when assessing the recoverability capabilities of software systems. This paper proposes a dependability benchmarking approach to determine the impact that faults (noted as disturbances in the standard) either accidental or malicious may have on the quality features exhibited by software components. As will be shown, the usefulness of the approach embraces all evaluator profiles (developers, acquirers and third-party evaluators) identified in the ISO/IEC 25000 \"SQuaRE\" standard. The feasibility of the proposal is finally illustrated through the benchmarking of three distinct software components, which implement the OLSR protocol specification, competing for integration in a wireless mesh network.","PeriodicalId":254760,"journal":{"name":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"14","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 17th Pacific Rim International Symposium on Dependable Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/PRDC.2011.13","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 14
Abstract
The integration of Commercial-Off-The-Shelf (COTS) components in software has reduced time-to-market and production costs, but selecting the most suitable component among those available remains a challenging task. This selection process, typically called benchmarking, requires evaluating the behaviour of eligible components in operation and ranking them according to quality characteristics. Most existing benchmarks only provide measures characterising the behaviour of software systems in the absence of faults, ignoring the severe impact that both accidental and malicious faults have on software quality. However, since building a system from COTS components may give rise to dependability issues caused by the interaction between components, benchmarking the system in the presence of faults is essential. The recent ISO/IEC 25045 standard addresses this gap by considering accidental faults when assessing the recoverability capabilities of software systems. This paper proposes a dependability benchmarking approach to determine the impact that faults (referred to as disturbances in the standard), whether accidental or malicious, may have on the quality features exhibited by software components. As will be shown, the usefulness of the approach embraces all evaluator profiles (developers, acquirers and third-party evaluators) identified in the ISO/IEC 25000 "SQuaRE" standard. The feasibility of the proposal is finally illustrated through the benchmarking of three distinct software components, each implementing the OLSR protocol specification, competing for integration in a wireless mesh network.
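As a rough illustration of the ranking step described in the abstract, the sketch below orders candidate components by a weighted aggregate of quality measures collected both without faults (a golden run) and under injected disturbances. This is a minimal, hypothetical example: the component names, quality measures, weights and values are assumptions for illustration and are not taken from the paper.

# Hypothetical sketch of ranking COTS candidates from dependability-benchmark
# results. Each candidate is measured in a golden run (no disturbances) and
# again with disturbances injected; candidates are ordered by a weighted
# aggregate of both. All names and numbers below are illustrative only.

from dataclasses import dataclass

@dataclass
class BenchmarkResult:
    component: str
    golden_run: dict         # normalised measures in [0, 1], no faults
    under_disturbance: dict  # same measures with faults injected

def aggregate_score(result: BenchmarkResult, weights: dict) -> float:
    """Weighted sum of quality measures, giving equal importance to
    nominal behaviour and behaviour in the presence of disturbances."""
    score = 0.0
    for measure, weight in weights.items():
        baseline = result.golden_run[measure]
        degraded = result.under_disturbance[measure]
        score += weight * 0.5 * (baseline + degraded)
    return score

# Three hypothetical OLSR implementations competing for integration.
results = [
    BenchmarkResult("olsr_impl_A",
                    {"packet_delivery": 0.95, "route_availability": 0.90},
                    {"packet_delivery": 0.60, "route_availability": 0.55}),
    BenchmarkResult("olsr_impl_B",
                    {"packet_delivery": 0.90, "route_availability": 0.92},
                    {"packet_delivery": 0.80, "route_availability": 0.78}),
    BenchmarkResult("olsr_impl_C",
                    {"packet_delivery": 0.97, "route_availability": 0.88},
                    {"packet_delivery": 0.50, "route_availability": 0.45}),
]

weights = {"packet_delivery": 0.6, "route_availability": 0.4}

ranking = sorted(results, key=lambda r: aggregate_score(r, weights), reverse=True)
for rank, r in enumerate(ranking, start=1):
    print(f"{rank}. {r.component}: {aggregate_score(r, weights):.3f}")

In this toy setup, a component that looks best in the fault-free run can still be outranked by one that degrades more gracefully under disturbances, which is the kind of trade-off the proposed dependability benchmark is meant to expose to developers, acquirers and third-party evaluators.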