{"title":"An Evolutionary Approach to Adapt Tests Across Mobile Apps","authors":"L. Mariani, M. Pezzè, Valerio Terragni, D. Zuddas","doi":"10.1109/AST52587.2021.00016","DOIUrl":"https://doi.org/10.1109/AST52587.2021.00016","url":null,"abstract":"Automatic generators of GUI tests often fail to generate semantically relevant test cases, and thus miss important test scenarios. To address this issue, test adaptation techniques can be used to automatically generate semantically meaningful GUI tests from test cases of applications with similar functionalities.In this paper, we present ADAPTDROID, a technique that approaches the test adaptation problem as a search-problem, and uses evolutionary testing to adapt GUI tests (including oracles) across similar Android apps. In our evaluation with 32 popular Android apps, ADAPTDROID successfully adapted semantically relevant test cases in 11 out of 20 cross-app adaptation scenarios.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129105903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated User Experience Testing through Multi-Dimensional Performance Impact Analysis","authors":"Chidera Biringa, Gökhan Kul","doi":"10.1109/AST52587.2021.00024","DOIUrl":"https://doi.org/10.1109/AST52587.2021.00024","url":null,"abstract":"Although there are many automated software testing suites, they usually focus on unit, system, and interface testing. However, especially software updates such as new security features have the potential to diminish user experience. In this paper, we propose a novel automated user experience testing methodology that learns how code changes impact the time unit and system tests take, and extrapolate user experience changes based on this information. Such a tool can be integrated into existing continuous integration pipelines, and it provides software teams immediate user experience feedback. We construct a feature set from lexical, layout, and syntactic characteristics of the code, and using Abstract Syntax Tree-Based Embeddings, we can calculate the approximate semantic distance to feed into a machine learning algorithm. In our experiments, we use several regression methods to estimate the time impact of software updates. Our open-source tool achieved a 3.7% mean absolute error rate with a random forest regressor.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"39 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134095726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extending Hierarchical Delta Debugging with Hoisting","authors":"Dániel Vince, Renáta Hodován, Daniella Bársony, Ákos Kiss","doi":"10.1109/AST52587.2021.00015","DOIUrl":"https://doi.org/10.1109/AST52587.2021.00015","url":null,"abstract":"Minimizing failing test cases is an important pre-processing step on the path of debugging. If much of a test case that triggered a bug does not contribute to the actual failure, then the time required to fix the bug can increase considerably. However, test case reduction itself can be a time consuming task, especially if done manually. Therefore, automated minimization techniques have been proposed, the minimizing Delta Debugging (DDMIN) and the Hierarchical Delta Debugging (HDD) algorithms being the most well known. DDMIN does not need any information about the structure of the test case, thus it works for any kind of input. If the structure is known, however, it can be utilized to create smaller test cases faster. This is exemplified by HDD, which works on tree-structured inputs, pruning subtrees at each level of the tree with the help of DDMIN.In this paper, we propose to extend HDD with a reduction method that does not prune subtrees, but replaces them with compatible subtrees further down the hierarchy, called hoisting. We have evaluated various combinations of pruning and hoisting on multiple test suites and found that hoisting can help to further reduce the size of test cases by as much as 80% compared to the baseline HDD. We have also compared our results to other state-of-the-art test case reduction algorithms and found that HDD extended with hoisting can produce smaller output in most of the cases.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128422081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated Performance Testing Based on Active Deep Learning","authors":"Ali Sedaghatbaf, M. H. Moghadam, Mehrdad Saadatmand","doi":"10.1109/AST52587.2021.00010","DOIUrl":"https://doi.org/10.1109/AST52587.2021.00010","url":null,"abstract":"Generating tests that can reveal performance issues in large and complex software systems within a reasonable amount of time is a challenging task. On one hand, there are numerous combinations of input data values to explore. On the other hand, we have a limited test budget to execute tests. What makes this task even more difficult is the lack of access to source code and the internal details of these systems. In this paper, we present an automated test generation method called ACTA for black-box performance testing. ACTA is based on active learning, which means that it does not require a large set of historical test data to learn about the performance characteristics of the system under test. Instead, it dynamically chooses the tests to execute using uncertainty sampling. ACTA relies on a conditional variant of generative adversarial networks, and facilitates specifying performance requirements in terms of conditions and generating tests that address those conditions. We have evaluated ACTA on a benchmark web application, and the experimental results indicate that this method is comparable with random testing, and two other machine learning methods, i.e. PerfXRL and DN.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-04-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117059484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extreme mutation testing in practice: An industrial case study","authors":"M. Betka, S. Wagner","doi":"10.1109/AST52587.2021.00021","DOIUrl":"https://doi.org/10.1109/AST52587.2021.00021","url":null,"abstract":"Mutation testing is used to evaluate the effectiveness of test suites. In recent years, a promising variation called extreme mutation testing emerged that is computationally less expensive. It identifies methods where their functionality can be entirely removed, and the test suite would not notice it, despite having coverage. These methods are called pseudo-tested. In this paper, we compare the execution and analysis times for traditional and extreme mutation testing and discuss what they mean in practice. We look at how extreme mutation testing impacts current software development practices and discuss open challenges that need to be addressed to foster industry adoption. For that, we conducted an industrial case study consisting of running traditional and extreme mutation testing in a large software project from the semiconductor industry that is covered by a test suite of more than 11,000 unit tests. In addition to that, we did a qualitative analysis of 25 pseudo-tested methods and interviewed two experienced developers to see how they write unit tests and gathered opinions on how useful the findings of extreme mutation testing are. Our results include execution times, scores, numbers of executed tests and mutators, reasons why methods are pseudo-tested, and an interview summary. We conclude that the shorter execution and analysis times are well noticeable in practice and show that extreme mutation testing supplements writing unit tests in conjunction with code coverage tools. We propose that pseudo-tested code should be highlighted in code coverage reports and that extreme mutation testing should be performed when writing unit tests rather than in a decoupled session. Future research should investigate how to perform extreme mutation testing while writing unit tests such that the results are available fast enough but still meaningful.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123763451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Survey of Video Game Testing","authors":"Cristiano Politowski, Fábio Petrillo, Yann-Gaël Guéhéneuc","doi":"10.1109/AST52587.2021.00018","DOIUrl":"https://doi.org/10.1109/AST52587.2021.00018","url":null,"abstract":"Video-game projects are notorious for having day- one bugs, no matter how big their budget or team size. The quality of a game is essential for its success. This quality could be assessed and ensured through testing. However, to the best of our knowledge, little is known about video-game testing. In this paper, we want to understand how game developers perform game testing. We investigate, through a survey, the academic and gray literature to identify and report on existing testing processes and how they could automate them. We found that game developers rely, almost exclusively, upon manual play-testing and the testers' intrinsic knowledge. We conclude that current testing processes fall short because of their lack of automation, which seems to be the natural next step to improve the quality of games while maintaining costs. However, the current game-testing techniques may not generalize to different types of games.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125188356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Property-based Testing of GraphQL APIs","authors":"Stefan Karlsson, Adnan vCauvsevi'c, Daniel Sundmark","doi":"10.1109/AST52587.2021.00009","DOIUrl":"https://doi.org/10.1109/AST52587.2021.00009","url":null,"abstract":"In recent years, GraphQL has become a popular way to expose web APIs. With its raise of adoption in industry, the quality of GraphQL APIs must be also assessed, as with any part of a software system, and preferably in an automated manner. However, there is currently a lack of methods to automatically generate tests to exercise GraphQL APIs.In this paper, we propose a method for automatically producing GraphQL queries to test GraphQL APIs. This is achieved using a property-based approach to create a generator for queries based on the GraphQL schema of the system under test.Our evaluation on a real world software system shows that this approach is both effective, in terms of finding real bugs, and efficient, as a complete schema can be covered in seconds. In addition, we evaluate the fault finding capability of the method when seeding known faults. 73% of the seeded faults were found, with room for improvements with regards to domain-specific behavior, a common oracle challenge in automatic test generation.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117340566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Program Committee","authors":"Janice G. Thorpe","doi":"10.1109/icdm.2018.00008","DOIUrl":"https://doi.org/10.1109/icdm.2018.00008","url":null,"abstract":"Thorpe Communication a variety of courses Her main academic interests include online education, instructional communication, and assessment of student learning. She is the Director of the Online program, Course Director for Research Methods, as well as the Assessment Director for the Communication Department at UCCS. Janice recieved a BA from the University of Tennessee in Education, a BS from the University of Colorado Denver in Communication, and a PhD from the University of Colorado Denver in Education - Research, Evaluation, and Measurement. throughout her academic career whether working as Director of Composition, Co-Directing the Denver Writing Project, serving as Chair of Faculty Assembly, or developing a campus-wide mentoring program. At the heart of her work is a desire to empower people through increased access to written literacy. Joanne's most recent co-authored book, Writing and School Reform (2017), focuses on the barriers to accessing written literacy in the articulation from high school to college. She is currently working in the areas of critical media literacy and digital ethics in an effort to expand our theoretical and pedagogical responses to the current challenges and opportunities of mass digital writing.","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123080319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Title Page i","authors":"","doi":"10.1109/ast52587.2021.00001","DOIUrl":"https://doi.org/10.1109/ast52587.2021.00001","url":null,"abstract":"","PeriodicalId":315603,"journal":{"name":"2021 IEEE/ACM International Conference on Automation of Software Test (AST)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114368540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}