Model-based hypothesis testing of uncertain software systems

IF 1.5 · CAS Zone 4 (Computer Science) · JCR Q3 (Computer Science, Software Engineering)
Matteo Camilli, A. Gargantini, P. Scandurra
{"title":"不确定软件系统的基于模型的假设检验","authors":"Matteo Camilli, A. Gargantini, P. Scandurra","doi":"10.1002/stvr.1730","DOIUrl":null,"url":null,"abstract":"Nowadays, there exists an increasing demand for reliable software systems able to fulfill their requirements in different operational environments and to cope with uncertainty that can be introduced both at design‐time and at runtime because of the lack of control over third‐party system components and complex interactions among software, hardware infrastructures and physical phenomena. This article addresses the problem of the discrepancy between measured data at runtime and the design‐time formal specification by using an inverse uncertainty quantification approach. Namely, we introduce a methodology called METRIC and its supporting toolchain to quantify and mitigate software system uncertainty during testing by combining (on‐the‐fly) model‐based testing and Bayesian inference. Our approach connects probabilistic input/output conformance theory with statistical hypothesis testing in order to assess if the behaviour of the system under test corresponds to its probabilistic formal specification provided in terms of a Markov decision process. An uncertainty‐aware model‐based test case generation strategy is used as a means to collect evidence from software components affected by sources of uncertainty. Test results serve as input to a Bayesian inference process that updates beliefs on model parameters encoding uncertain quality attributes of the system under test. This article describes our approach from both theoretical and practical perspectives. An extensive empirical evaluation activity has been conducted in order to assess the cost‐effectiveness of our approach. We show that, under same effort constraints, our uncertainty‐aware testing strategy increases the accuracy of the uncertainty quantification process up to 50 times with respect to traditional model‐based testing methods.","PeriodicalId":49506,"journal":{"name":"Software Testing Verification & Reliability","volume":"8 1","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2020-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"12","resultStr":"{\"title\":\"Model‐based hypothesis testing of uncertain software systems\",\"authors\":\"Matteo Camilli, A. Gargantini, P. Scandurra\",\"doi\":\"10.1002/stvr.1730\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Nowadays, there exists an increasing demand for reliable software systems able to fulfill their requirements in different operational environments and to cope with uncertainty that can be introduced both at design‐time and at runtime because of the lack of control over third‐party system components and complex interactions among software, hardware infrastructures and physical phenomena. This article addresses the problem of the discrepancy between measured data at runtime and the design‐time formal specification by using an inverse uncertainty quantification approach. Namely, we introduce a methodology called METRIC and its supporting toolchain to quantify and mitigate software system uncertainty during testing by combining (on‐the‐fly) model‐based testing and Bayesian inference. Our approach connects probabilistic input/output conformance theory with statistical hypothesis testing in order to assess if the behaviour of the system under test corresponds to its probabilistic formal specification provided in terms of a Markov decision process. 
An uncertainty‐aware model‐based test case generation strategy is used as a means to collect evidence from software components affected by sources of uncertainty. Test results serve as input to a Bayesian inference process that updates beliefs on model parameters encoding uncertain quality attributes of the system under test. This article describes our approach from both theoretical and practical perspectives. An extensive empirical evaluation activity has been conducted in order to assess the cost‐effectiveness of our approach. We show that, under same effort constraints, our uncertainty‐aware testing strategy increases the accuracy of the uncertainty quantification process up to 50 times with respect to traditional model‐based testing methods.\",\"PeriodicalId\":49506,\"journal\":{\"name\":\"Software Testing Verification & Reliability\",\"volume\":\"8 1\",\"pages\":\"\"},\"PeriodicalIF\":1.5000,\"publicationDate\":\"2020-02-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"12\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Software Testing Verification & Reliability\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1002/stvr.1730\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"COMPUTER SCIENCE, SOFTWARE ENGINEERING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Software Testing Verification & Reliability","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1002/stvr.1730","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Citations: 12

Abstract

There is an increasing demand for reliable software systems that can fulfill their requirements in different operational environments and cope with uncertainty introduced both at design time and at runtime, owing to the lack of control over third-party system components and the complex interactions among software, hardware infrastructures and physical phenomena. This article addresses the discrepancy between data measured at runtime and the design-time formal specification by means of an inverse uncertainty quantification approach. Specifically, we introduce a methodology called METRIC, together with its supporting toolchain, to quantify and mitigate software system uncertainty during testing by combining on-the-fly model-based testing with Bayesian inference. Our approach connects probabilistic input/output conformance theory with statistical hypothesis testing in order to assess whether the behaviour of the system under test conforms to its probabilistic formal specification, given as a Markov decision process. An uncertainty-aware model-based test case generation strategy collects evidence from the software components affected by sources of uncertainty. Test results feed a Bayesian inference process that updates beliefs on the model parameters encoding uncertain quality attributes of the system under test. This article describes our approach from both theoretical and practical perspectives. An extensive empirical evaluation has been conducted to assess the cost-effectiveness of the approach: under the same effort constraints, our uncertainty-aware testing strategy improves the accuracy of the uncertainty quantification process by up to 50 times with respect to traditional model-based testing methods.
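The Bayesian update described in the abstract admits a compact illustration. Assuming the conjugate setting that is typical for this kind of inference, a Beta prior over an uncertain transition probability of the Markov decision process can be updated in closed form from pass/fail test outcomes. The following minimal Python sketch is not the METRIC toolchain: the specification value, the prior, the recorded observations and the 95% credible-interval conformance check are hypothetical choices made purely for illustration.

```python
# Illustrative sketch (not the METRIC toolchain): Bayesian updating of an
# uncertain MDP transition probability from test outcomes, followed by a
# simple conformance check against the design-time specification.
from scipy import stats

# Hypothetical design-time specification: the model claims this transition
# fires with probability 0.9.
SPEC_PROB = 0.9

# Weakly informative Beta(1, 1) prior over the true transition probability.
alpha, beta = 1.0, 1.0

# Hypothetical evidence gathered by executing generated test cases: each
# trial records whether the expected output transition was observed.
observations = [True, True, False, True, True, True, True, True, False, True]
successes = sum(observations)
failures = len(observations) - successes

# Conjugate Beta-Binomial update: the posterior is Beta(alpha + s, beta + f).
posterior = stats.beta(alpha + successes, beta + failures)

# Posterior mean is the updated point estimate of the uncertain parameter.
print(f"posterior mean: {posterior.mean():.3f}")

# Conformance check: does the specified probability fall inside the
# equal-tailed 95% credible interval of the posterior?
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
print("spec value consistent with evidence:", lo <= SPEC_PROB <= hi)
```

For transitions with more than two possible outcomes, the same closed-form update generalizes to a Dirichlet prior over the whole transition distribution; more test evidence narrows the credible interval and thus sharpens the hypothesis test.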
Source journal
Software Testing Verification & Reliability
Software Testing Verification & Reliability 工程技术-计算机:软件工程
CiteScore: 3.70
Self-citation rate: 0.00%
Annual publications: 34
Review time: >12 weeks
Journal description: The journal is the premier outlet for research results on the subjects of testing, verification and reliability. Readers will find useful research on issues pertaining to building better software and evaluating it. The journal is unique in its emphasis on theoretical foundations and applications to real-world software development. The balance of theory, empirical work, and practical applications provides readers with better techniques for testing, verifying and improving the reliability of software. The journal targets researchers, practitioners, educators and students that have a vested interest in results generated by high-quality testing, verification and reliability modeling and evaluation of software. Topics of special interest include, but are not limited to:
- New criteria for software testing and verification
- Application of existing software testing and verification techniques to new types of software, including web applications, web services, embedded software, aspect-oriented software, and software architectures
- Model-based testing
- Formal verification techniques such as model-checking
- Comparison of testing and verification techniques
- Measurement of and metrics for testing, verification and reliability
- Industrial experience with cutting-edge techniques
- Descriptions and evaluations of commercial and open-source software testing tools
- Reliability modeling, measurement and application
- Testing and verification of software security
- Automated test data generation
- Process issues and methods
- Non-functional testing