Risk-Based Testing of Self-Adaptive Systems Using Run-Time Predictions

André Reichstaller, Alexander Knapp
{"title":"Risk-Based Testing of Self-Adaptive Systems Using Run-Time Predictions","authors":"André Reichstaller, Alexander Knapp","doi":"10.1109/SASO.2018.00019","DOIUrl":null,"url":null,"abstract":"Devising test strategies for specific test goals relies on predictions of the run-time behavior of the software system under test (SuT) based on specifications, models, or the code. For a system following a single strategy as run-time behavior, the test strategy can be fixed at design time. For an adaptive system, which may choose from several strategies due to environment changes, a combination of test strategies has to be found, which still can be achieved at design time provided that all system strategies and the switching policy are predictable. Self-adaptive systems, also adapting their system strategies and strategy switches according to the environmental dynamics, render such design-time predictions futile, but there also the test strategies have to be adapted at run time. We characterize the necessary interplay between system strategy adaptation of the SuT and test strategy adaptation of the tester as a Stochastic Game. We argue that the tester's part, formalized by means of a Markov Decision Process, can be automatically solved by the use of Reinforcement Learning methods where we discuss both model-based and model-free variants. Finally, we propose a particular framework inspired by Direct Future Prediction which, given a simulation of the SuT and its environment, autonomously finds good test strategies w.r.t. imposed quanti?able goals. While these goals, in general, can be initialized arbitrarily, our evaluation concentrates on risk-based goals rewarding the detection of hazardous failures.","PeriodicalId":405522,"journal":{"name":"2018 IEEE 12th International Conference on Self-Adaptive and Self-Organizing Systems (SASO)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE 12th International Conference on Self-Adaptive and Self-Organizing Systems (SASO)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SASO.2018.00019","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 11

Abstract

Devising test strategies for specific test goals relies on predictions of the run-time behavior of the software system under test (SuT), based on specifications, models, or the code. For a system whose run-time behavior follows a single strategy, the test strategy can be fixed at design time. For an adaptive system, which may choose among several strategies in response to environment changes, a combination of test strategies has to be found; this can still be done at design time, provided that all system strategies and the switching policy are predictable. Self-adaptive systems, which also adapt their system strategies and strategy switches to the environmental dynamics, render such design-time predictions futile; there, the test strategies, too, have to be adapted at run time. We characterize the necessary interplay between system strategy adaptation of the SuT and test strategy adaptation of the tester as a Stochastic Game. We argue that the tester's part, formalized by means of a Markov Decision Process, can be solved automatically using Reinforcement Learning methods, of which we discuss both model-based and model-free variants. Finally, we propose a particular framework inspired by Direct Future Prediction which, given a simulation of the SuT and its environment, autonomously finds good test strategies with respect to imposed quantifiable goals. While these goals can, in general, be initialized arbitrarily, our evaluation concentrates on risk-based goals rewarding the detection of hazardous failures.
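The abstract describes the model-free Reinforcement Learning variant only at a high level. The sketch below illustrates, under stated assumptions, what such a tester could look like: a tabular Q-learning agent interacting with a simulation of the SuT, rewarded by the severity of the failures its test inputs expose (a risk-based goal). All identifiers (SuTSimulation, q_learning_tester, the toy states and severity values) are hypothetical and invented for illustration; the paper's own framework is inspired by Direct Future Prediction rather than plain Q-learning.

```python
# Hypothetical sketch of a model-free tester: a Q-learning agent learns a test
# strategy for an MDP whose reward is the severity of any failure a test input
# exposes. SuTSimulation, its states, and the severity values are invented for
# illustration and are not part of the paper's proposed framework.
import random
from collections import defaultdict

class SuTSimulation:
    """Toy stand-in for a simulation of the SuT and its environment."""
    def reset(self):
        self.t = 0
        return "nominal"

    def step(self, state, test_action):
        self.t += 1
        # Stressful test inputs push the toy SuT towards a hazardous failure.
        if test_action == "stress" and random.random() < 0.3:
            next_state = "hazard" if state == "degraded" else "degraded"
        else:
            next_state = state
        severity = 10.0 if next_state == "hazard" else 0.0  # risk weight of the exposed failure
        done = next_state == "hazard" or self.t >= 20
        return next_state, severity, done

def q_learning_tester(sim, actions, episodes=1000,
                      alpha=0.1, gamma=0.95, epsilon=0.1):
    """Learn a test strategy maximizing the expected risk-weighted reward."""
    q = defaultdict(float)  # Q-values indexed by (state, test action)
    for _ in range(episodes):
        state, done = sim.reset(), False
        while not done:
            # epsilon-greedy exploration over the available test actions
            if random.random() < epsilon:
                action = random.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, severity, done = sim.step(state, action)
            best_next = 0.0 if done else max(q[(next_state, a)] for a in actions)
            # Risk-based goal: the reward is the severity of the detected failure.
            q[(state, action)] += alpha * (severity + gamma * best_next - q[(state, action)])
            state = next_state
    # The greedy policy over the learned values is the adapted test strategy.
    states = {s for (s, _) in q}
    return {s: max(actions, key=lambda a: q[(s, a)]) for s in states}

if __name__ == "__main__":
    strategy = q_learning_tester(SuTSimulation(), actions=["stress", "nominal_input"])
    print(strategy)  # e.g. {'nominal': 'stress', 'degraded': 'stress'}
```

The point of the sketch is that the test strategy is not fixed at design time: it is derived from action values learned against the simulated, adapting SuT, so re-running the learning loop as the SuT's behavior changes yields an adapted test strategy.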