{"title":"A Metric Framework for the Gamification of Web and Mobile GUI Testing","authors":"Filippo Cacciotto, Tommaso Fulcini, Riccardo Coppola, Luca Ardito","doi":"10.1109/ICSTW52544.2021.00032","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00032","url":null,"abstract":"System testing through the Graphical User Interface (GUI) is a valuable form of Verification & Validation for modern applications, especially in graphically-intensive domains like web and mobile applications. However, the practice is often overlooked by developers, mostly because of its costly nature and the absence of immediate feedback about the quality of test sequences. This paper describes a proposal for the Gamification of exploratory GUI testing. We define - in a tool- and domain-agnostic way - the basic concepts, a set of metrics, a scoring scheme and visual feedback to enable a gamified approach to the practice; we finally discuss the potential implications and envision a roadmap for the evaluation of the approach.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128922919","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-company Consumer Product Software Test Architecture Industry Experience Report","authors":"J. Hagar","doi":"10.1109/ICSTW52544.2021.00036","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00036","url":null,"abstract":"A case study is presented for a consumer hardware company which added software to its product line with little process-based test planning, including not having a software test architecture (STA). The paper presents the problem, solution, and limitations of the case study as well as future work.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130259414","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online GANs for Automatic Performance Testing","authors":"Ivan Porres, Hergys Rexha, S. Lafond","doi":"10.1109/ICSTW52544.2021.00027","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00027","url":null,"abstract":"In this paper, we present a novel algorithm for automatic performance testing that uses an online variant of the Generative Adversarial Network (GAN) to optimize the test generation process. The objective of the proposed approach is to generate, for a given test budget, a test suite containing a high number of tests revealing performance defects. This is achieved using a GAN to generate the tests and predict their outcome. This GAN is trained online while generating and executing the tests. The proposed approach does not require a prior training set or model of the system under test. We provide an initial evaluation of the algorithm using an example test system, and compare the obtained results with other possible approaches. We consider that the presented algorithm serves as a proof of concept and we hope that it can spark a research discussion on the application of GANs to test generation.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120944693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Flaky Mutants; Another Concern for Mutation Testing","authors":"Sten Vercammen, S. Demeyer, Markus Borg, Robbe Claessens","doi":"10.1109/ICSTW52544.2021.00054","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00054","url":null,"abstract":"Software testing is the dominant method for quality assurance and control in software engineering [1], [2]. Test suites serve as quality gates to safeguard against programming faults. But not every test suite is written equally. We usually gauge quality using metrics such as code coverage, which assess how much of the code base has been covered. However, they do not tell whether the tests actually verify the intended behavior. Mutation testing does this by deliberately injecting faults into the system under test and verifying how many of them the test suite can detect. For every injected fault that is not detected by the test suite, an additional test should be written. In the academic community, mutation testing is acknowledged as the most promising technique for automated assessment of the strength of a test suite [3], [4].","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121285293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An environment for benchmarking combinatorial test suite generators","authors":"A. Bombarda, Edoardo Crippa, A. Gargantini","doi":"10.1109/ICSTW52544.2021.00021","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00021","url":null,"abstract":"New tools for combinatorial test generation are proposed every year. However, different generators may perform differently on different models, in terms of the number of tests produced and generation time, so choosing which generator to use can be challenging. Classical comparison between CIT generators considers only the number of tests composing the test suite. Still, especially when the time dedicated to testing activities is limited, generation time can be decisive. Thus, we propose a benchmarking framework including 1) a set of generic benchmark models, 2) an interface to easily integrate new generators, and 3) methods to benchmark each generator against the others and to check validity and completeness. We have tested the proposed environment using five different generators (ACTS, CAgen, CASA, Medici, and PICT), comparing the obtained results in terms of the number of test cases, generation times, errors, completeness, and validity. Finally, we propose a CIT competition between combinatorial generators based on our framework.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126023089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Agent-based Architecture for AI-Enhanced Automated Testing for XR Systems, a Short Paper","authors":"I. Prasetya, Samira Shirzadehhajimahmood, Saba Gholizadeh Ansari, Pedro M. Fernandes, R. Prada","doi":"10.1109/ICSTW52544.2021.00044","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00044","url":null,"abstract":"This short paper presents an architectural overview of an agent-based framework called iv4XR for automated testing that is currently under development by an H2020 project with the same name. The framework’s main intended use case is testing the family of Extended Reality (XR) based systems (e.g. 3D games, VR systems, AR systems), though the approach can indeed be adapted to target other types of interactive systems. The framework is unique in that it is an agent-based system. Agents are inherently reactive, and therefore are arguably a natural match to deal with interactive systems. Moreover, an agent is also a natural vessel for mounting and combining different AI capabilities, e.g. reasoning, navigation, and learning.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132945358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Boosted Exploratory Test Architecture: Coaching Test Engineers with Word Similarity","authors":"Y. Nishi, Yusuke Shibasaki","doi":"10.1109/ICSTW52544.2021.00038","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00038","url":null,"abstract":"Testing software that uses machine learning and neural networks is difficult due to non-linearity. This paper proposes the Boosted Exploratory Test architecture, which supports the creativity of test engineers by using a non-linear test generator. This paper also shows experimental results indicating that the Boosted Exploratory Test architecture with Word2Vec performs better for a smart speaker.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129439491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Tag-based Recommender System for Regression Test Case Prioritization","authors":"Maral Azizi","doi":"10.1109/ICSTW52544.2021.00035","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00035","url":null,"abstract":"In continuous integration (CI) development environments, the software undergoes frequent changes due to bug fixes or new feature requests. Some of these changes may accidentally cause regression issues in the newly released software version. To ensure the correctness of the newly released software, it is important to perform enough testing prior to code submission to avoid breaking builds. Regression testing is one of the important maintenance activities that can control the quality and reliability of modified software, but it can also be very expensive. Test case prioritization can reduce the costs of regression testing by reordering test cases to better meet testing objectives. To date, various test prioritization techniques have been developed; however, the majority of the proposed approaches utilize static or dynamic analyses to decide which test cases should be selected. These analyses often have significant cost overhead and are time-consuming. This paper introduces a new method for automatic test case prioritization in a CI environment that aims to minimize the testing cost. Our proposed approach uses information retrieval to automatically select test cases based on their textual similarity to the portion of the code that has been changed. Our technique not only helps developers to organize and manage the software repository but also helps them to find the relevant resources quickly. To evaluate our approach, we performed an empirical study using 37 versions of 6 open source applications. The results of our empirical study indicate that our proposed method can improve the effectiveness and efficiency of test case prioritization techniques.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130952988","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Random Selection Might Just be Indomitable","authors":"Rowland Pitts","doi":"10.1109/ICSTW52544.2021.00014","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00014","url":null,"abstract":"Mutation Testing offers a powerful approach to assessing unit test set quality; however, software developers may be reluctant to embrace the technique due to the tremendous number of mutants it generates, including redundant and equivalent mutants. Recent research indicates that redundant mutants affect a test engineer’s work effort only slightly, whereas equivalent mutants have a direct linear impact. Moreover, the time invested analyzing equivalent mutants produces no unit tests. Dominator mutants seek to address the redundancy problem, but they require the identification of all subsumption relationships, which implicitly identifies all equivalent mutants. This research study shows that, when equally informed, random selection can perform as well as dominator mutants.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129862892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Costs for Adopting and Using Model-Based Testing in Agile SCRUM Teams","authors":"Athanasios Karapantelakis","doi":"10.1109/ICSTW52544.2021.00042","DOIUrl":"https://doi.org/10.1109/ICSTW52544.2021.00042","url":null,"abstract":"The introduction of Model-Based Testing (MBT) in software development teams brings change to software testing, as traditional testing processes are replaced by MBT. When planning for MBT adoption, team leaders can benefit from tools that are able to provide information on the cost of introducing MBT in their teams. We have developed and present a set of models which, given a number of initial parameters such as employee competence and availability, as well as historical usage data, can estimate the costs of MBT adoption. These models are designed based on experience from previous practice of MBT. To demonstrate the practical value of our models, we run a number of simulations on hypothetical MBT adoption and use scenarios, which can be realistically applied to different teams considering adopting MBT.","PeriodicalId":371680,"journal":{"name":"2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133681378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}