{"title":"Comparing Automated Reuse of Scripted Tests and Model-Based Tests for Configurable Software","authors":"Stefan Fischer, R. Ramler, L. Linsbauer","doi":"10.1109/APSEC53868.2021.00049","DOIUrl":null,"url":null,"abstract":"Highly configurable software gives developers more flexibility to meet different customer requirements and enables users to better tailor software to their needs. However, variability causes higher complexity in software and complicates many development processes, such as testing. One major challenge for testing of configurable software is adjusting tests to fit different configurations, which often has to be done manually. In our previous work, we evaluated the use of an automated reuse technique to support the reuse of existing tests for new configurations. Research on automated reuse of model variants and on applying model-based testing to configurable software encouraged us to also evaluate the automated reuse of model-based test variants. The goal is to investigate differences in applying automated reuse to the different testing paradigms. Our evaluation provides evidence for the usefulness of automated reuse for both testing paradigms. Nonetheless we found some differences in the robustness of tests to small inaccuracies of the reuse approach.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/APSEC53868.2021.00049","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Highly configurable software gives developers more flexibility to meet different customer requirements and enables users to better tailor software to their needs. However, variability increases the complexity of the software and complicates many development processes, such as testing. One major challenge in testing configurable software is adjusting tests to fit different configurations, which often has to be done manually. In our previous work, we evaluated the use of an automated reuse technique to support the reuse of existing tests for new configurations. Research on automated reuse of model variants and on applying model-based testing to configurable software encouraged us to also evaluate the automated reuse of model-based test variants. The goal is to investigate differences in applying automated reuse to the different testing paradigms. Our evaluation provides evidence for the usefulness of automated reuse for both testing paradigms. Nonetheless, we found some differences in how robust the tests of each paradigm are to small inaccuracies introduced by the reuse approach.
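To make the core challenge concrete, the following is a minimal, hypothetical sketch (not taken from the paper) of a scripted JUnit 5 test for one product variant of a configurable system. The feature names, the Report and CsvExporter classes, and the configuration set are all invented for illustration; they only show why a test written for one configuration cannot simply be copied to another without adjustment, which is the manual step that automated reuse aims to support.

```java
// Hypothetical illustration: a scripted test that is only valid for
// configurations that include the optional EXPORT feature. Reusing it for a
// new configuration requires checking the selected features and possibly
// adapting the expected output.
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assumptions.assumeTrue;

import java.util.Set;
import org.junit.jupiter.api.Test;

class ExportFeatureTest {

    // Hypothetical feature selection of the product variant under test.
    private static final Set<String> CONFIGURATION = Set.of("BASE", "EXPORT");

    @Test
    void exportsReportAsCsv() {
        // Skip the test for variants that do not include the EXPORT feature.
        assumeTrue(CONFIGURATION.contains("EXPORT"));

        Report report = new Report("total", 42);
        // The expected output may also differ between configurations,
        // which is why plain copy-and-paste reuse of tests breaks down.
        assertEquals("total;42", new CsvExporter().export(report));
    }
}

// Minimal stand-in production code so the sketch is self-contained.
class Report {
    final String label;
    final int value;
    Report(String label, int value) { this.label = label; this.value = value; }
}

class CsvExporter {
    String export(Report r) { return r.label + ";" + r.value; }
}
```

A model-based test for the same behavior would instead be derived from a behavioral model of the variant, so reuse operates on model elements rather than on test scripts; the paper compares how well automated reuse copes with each representation.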