{"title":"MuVM: Higher Order Mutation Analysis Virtual Machine for C","authors":"Susumu Tokumoto, H. Yoshida, Kazunori Sakamoto, S. Honiden","doi":"10.1109/ICST.2016.18","DOIUrl":"https://doi.org/10.1109/ICST.2016.18","url":null,"abstract":"Mutation analysis is a method for evaluating the effectiveness of a test suite by seeding faults artificially and measuring the fraction of seeded faults detected by the test suite. The major limitation of mutation analysis is its lengthy execution time, because it involves generating, compiling, and running large numbers of mutated programs, called mutants. Our tool MuVM achieves a significant runtime improvement by performing higher order mutation analysis using four techniques: meta mutation, mutation on a virtual machine, higher order split-stream execution, and an online adaptation technique. In order to obtain the same behavior as mutating the source code directly, meta mutation preserves the mutation location information, which may otherwise be lost during bitcode compilation and optimization. Mutation on a virtual machine reduces the compilation and testing cost by compiling a program once and invoking a process once. Higher order split-stream execution also reduces the testing cost by executing common parts of the mutants together and splitting the execution at a seeded fault. The online adaptation technique reduces the number of generated mutants by omitting infeasible mutants. Our comparative experiments indicate that our tool significantly outperforms an existing tool, an existing technique (mutation schema generation), and no-split-stream execution in higher order mutation.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125449568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tedsuto: A General Framework for Testing Dynamic Software Updates","authors":"Luís Pina, M. Hicks","doi":"10.1109/ICST.2016.27","DOIUrl":"https://doi.org/10.1109/ICST.2016.27","url":null,"abstract":"Dynamic software updating (DSU) is a technique for patching running programs, to fix bugs or add new features. DSU avoids the downtime of stop-and-restart updates, but creates new risks -- an incorrect or ill-timed dynamic update could result in a crash or misbehavior, defeating the whole purpose of DSU. To reduce such risks, dynamic updates should be carefully tested before they are deployed. This paper presents Tedsuto, a general testing framework for DSU, along with a concrete implementation of it for Rubah, a state-of-the-art Java-based DSU system. Tedsuto uses system-level tests developed for the old and new versions of the updateable software, and systematically tests whether a dynamic update might result in a test failure. Very often this process is fully automated, while in some cases (e.g., to test new-version functionality) some manual annotations are required. To evaluate Tedsuto's efficacy, we applied it to dynamic updates previously developed (and tested in an ad hoc manner) for the H2 SQL database server and the CrossFTP server -- two real-world, multithreaded systems. We used three large test suites, totalling 446 tests, and we found a variety of update-related bugs quickly, and at low cost.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129302030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Framework for Monkey GUI Testing","authors":"Thomas Wetzlmaier, R. Ramler, Werner Putschögl","doi":"10.1109/ICST.2016.51","DOIUrl":"https://doi.org/10.1109/ICST.2016.51","url":null,"abstract":"Testing via graphical user interfaces (GUI) is a complex and labor-intensive task. Numerous techniques, tools and frameworks have been proposed for automating GUI testing. In many projects, however, the introduction of automated tests did not reduce the overall effort of testing but shifted it from manual test execution to test script development and maintenance. As a pragmatic solution, random testing approaches (aka \"monkey testing\") have been suggested for automated random exploration of the system under test via the GUI. This paper presents a versatile framework for monkey GUI testing. The framework provides reusable components and a predefined, generic workflow with extension points for developing custom-built test monkeys. It supports tailoring the monkey for a particular application scenario and the technical requirements imposed by the system under test. The paper describes the customization of test monkeys for an open source project and in an industry application, where the framework has been used for successfully transferring the idea of monkey testing into an industry solution.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"1966 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129719205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Test Case Prioritization for Compilers: A Text-Vector Based Approach","authors":"Junjie Chen, Y. Bai, Dan Hao, Yingfei Xiong, Hongyu Zhang, Lu Zhang, Bing Xie","doi":"10.1109/ICST.2016.19","DOIUrl":"https://doi.org/10.1109/ICST.2016.19","url":null,"abstract":"Test case prioritization aims to schedule the execution order of test cases so as to detect bugs as early as possible. For compiler testing, the demand for both effectiveness and efficiency poses a challenge for test case prioritization. In the literature, most existing approaches prioritize test cases by using some coverage information (e.g., statement coverage or branch coverage), which is collected with considerable extra effort. Although input-based test case prioritization relies only on test inputs, it can hardly be applied when test inputs are programs. In this paper we propose a novel text-vector based test case prioritization approach, which prioritizes test cases for C compilers without coverage information. Our approach first transforms each test case into a text vector by extracting its tokens, which reflect fault-relevant characteristics, and then prioritizes test cases based on these text vectors. In particular, we present three prioritization strategies: a greedy strategy, an adaptive random strategy, and a search strategy. To investigate the efficiency and effectiveness of our approach, we conduct an experiment on two C compilers (i.e., GCC and LLVM), and find that our approach is much more efficient than the existing approaches and is effective in prioritizing test cases.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131555708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mysteries of DropBox: Property-Based Testing of a Distributed Synchronization Service","authors":"John Hughes, B. Pierce, T. Arts, U. Norell","doi":"10.1109/ICST.2016.37","DOIUrl":"https://doi.org/10.1109/ICST.2016.37","url":null,"abstract":"File synchronization services such as Dropbox are used by hundreds of millions of people to replicate vital data. Yet rigorous models of their behavior are lacking. We present the first formal -- and testable -- model of the core behavior of a modern file synchronizer, and we use it to discover surprising behavior in two widely deployed synchronizers. Our model is based on a technique for testing nondeterministic systems that avoids requiring that the system's internal choices be made visible to the testing framework.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114714385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting and Localizing Internationalization Presentation Failures in Web Applications","authors":"Abdulmajeed Alameer, Sonal Mahajan, William G. J. Halfond","doi":"10.1109/ICST.2016.36","DOIUrl":"https://doi.org/10.1109/ICST.2016.36","url":null,"abstract":"Web applications can be easily made available to an international audience by leveraging frameworks and tools for automatic translation and localization. However, these automated changes can distort the appearance of web applications, since it is challenging for developers to design their websites to accommodate the expansion and contraction of text after it is translated to another language. Existing web testing techniques do not support developers in checking for these types of problems, and manually checking every page in every language can be a labor-intensive and error-prone task. To address this problem, we introduce an automated technique for detecting when a web page's appearance has been distorted due to internationalization efforts and identifying the HTML elements or text responsible for the observed problem. In our evaluation, our approach was able to detect internationalization problems in a set of 54 web applications with high precision and recall and was able to accurately identify the underlying elements in the web pages that led to the observed problem.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115819433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interpreting Coverage Information Using Direct and Indirect Coverage","authors":"Chen Huo, J. Clause","doi":"10.1109/ICST.2016.20","DOIUrl":"https://doi.org/10.1109/ICST.2016.20","url":null,"abstract":"Because of the numerous benefits of tests, developers often wish their applications had more tests. Unfortunately, it is challenging to determine what new tests to add in order to improve the quality of the test suite. A number of approaches, including numerous coverage criteria, have been proposed by the research community to help developers focus their limited testing resources. However, coverage criteria often fall short of this goal: achieving 100% coverage is often infeasible, necessitating the difficult process of determining if a piece of uncovered code is actually executable, and the criteria do not take into account how the code is covered. In this paper, we propose a new approach for interpreting coverage information, based on the concepts of direct coverage and indirect coverage, that addresses these limitations. We also present the results of an empirical study of 17 applications that demonstrate that indirectly covered code is common in real-world software, that faults in indirectly covered code are significantly less likely to be detected than faults located in directly covered code, and that indirectly covered code typically clusters at the method level. This means that identifying indirectly covered methods can be effective at helping testers improve the quality of their test suites by directing them to insufficiently tested code.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131504844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Debugging without Testing","authors":"Wided Ghardallou, Nafi Diallo, A. Mili, M. Frias","doi":"10.1109/ICST.2016.12","DOIUrl":"https://doi.org/10.1109/ICST.2016.12","url":null,"abstract":"It is so inconceivable to debug a program without testing it that these two words are used nearly interchangeably. Yet we argue that using the concept of relative correctness we can indeed remove a fault from a program and prove that the fault has been removed, by proving that the new program is more correct than the original. This is a departure from the traditional roles of proving and testing methods, whereby static proof methods are applied to a correct program to prove its correctness, and dynamic testing methods are applied to an incorrect program to expose its faults.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131679947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self-Healing Cloud Applications","authors":"Rui Xin","doi":"10.1109/ICST.2016.50","DOIUrl":"https://doi.org/10.1109/ICST.2016.50","url":null,"abstract":"Cloud computing offers on-demand services to deploy and run applications with flexible and scalable resource pooling. The techniques adopted by cloud systems introduce reliability issues that challenge the design of cloud applications. In my PhD I work on the key problem of improving the reliability of cloud applications. In particular, I am investigating the definition of effective and efficient self-healing approaches that integrate failure prediction, fault localization, and fault fixing mechanisms in the context of cloud-based systems. In the first part of my research I investigated the problem of automatic failure prediction, which constitutes the first step of a complete self-healing approach. I identified an original approach based on a combination of data analytics and machine learning techniques, and developed an early prototype to collect experimental data about the proposed approach. The data collected so far indicate that the approach achieves both high precision and high recall.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132558143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatically Documenting Unit Test Cases","authors":"Boyang Li, Christopher Vendome, M. Vásquez, D. Poshyvanyk, Nicholas A. Kraft","doi":"10.1109/ICST.2016.30","DOIUrl":"https://doi.org/10.1109/ICST.2016.30","url":null,"abstract":"Maintaining unit test cases is important during the maintenance and evolution of a software system. In particular, automatically documenting these unit test cases can ease the burden on the developers who maintain them. For instance, by relying on up-to-date documentation, developers can more easily identify test cases that relate to some new or modified functionality of the system. We surveyed 212 developers (both industrial and open-source) to understand their perspectives on writing, maintaining, and documenting unit test cases. In addition, we mined change histories of C# software systems and empirically found that unit test methods seldom had preceding comments and infrequently had inner comments, and that both were rarely modified as those methods were modified. In order to support developers in maintaining unit test cases, we propose a novel approach - UnitTestScribe - that combines static analysis, natural language processing, backward slicing, and code summarization techniques to automatically generate natural language documentation of unit test cases. We evaluated UnitTestScribe on four subject systems by means of an online survey with industrial developers and graduate students. In general, participants indicated that UnitTestScribe descriptions are complete, concise, and easy to read.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123646170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}