{"title":"Using Exploration Focused Techniques to Augment Search-Based Software Testing: An Experimental Evaluation","authors":"Bogdan Marculescu, R. Feldt, R. Torkar","doi":"10.1109/ICST.2016.26","DOIUrl":"https://doi.org/10.1109/ICST.2016.26","url":null,"abstract":"Search-based software testing (SBST) often uses objective-based approaches to solve testing problems. There are, however, situations where the validity and completeness of objectives cannot be ascertained, or where there is insufficient information to define objectives at all. Incomplete or incorrect objectives may steer the search away from interesting behavior of the software under test (SUT) and from potentially useful test cases. This papers investigates the degree to which exploration-based algorithms can be used to complement an objective-based tool we have previously developed and evaluated in industry. In particular, we would like to assess how exploration-based algorithms perform in situations where little information on the behavior space is available a priori. We have conducted an experiment comparing the performance of an exploration-based algorithm with an objective-based one on a problem with a high-dimensional behavior space. In addition, we evaluate to what extent that performance degrades in situations where computational resources are limited. Our experiment shows that exploration-based algorithms are useful in covering a larger area of the behavior space and result in a more diverse solution population. Typically, of the candidate solutions that exploration-based algorithms propose, more than 80% were not covered by their objective-based counterpart. This increased diversity is present in the resulting population even when computational resources are limited. We conclude that exploration-focused algorithms are a useful means of investigating high-dimensional spaces, even in situations where limited information and limited resources are available.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122194728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unit Test Generation During Software Development: EvoSuite Plugins for Maven, IntelliJ and Jenkins","authors":"Andrea Arcuri, José Campos, G. Fraser","doi":"10.1109/ICST.2016.44","DOIUrl":"https://doi.org/10.1109/ICST.2016.44","url":null,"abstract":"Different techniques to automatically generate unit tests for object oriented classes have been proposed, but how to integrate these tools into the daily activities of software development is a little investigated question. In this paper, we report on our experience in supporting industrial partners in introducing the EvoSuite automated JUnit test generation tool in their software development processes. The first step consisted of providing a plugin to the Apache Maven build infrastructure. The move from a research-oriented point-and-click tool to an automated step of the build process has implications on how developers interact with the tool and generated tests, and therefore, we produced a plugin for the popular IntelliJ Integrated Development Environment (IDE). As build automation is a core component of Continuous Integration (CI), we provide a further plugin to the Jenkins CI system, which allows developers to monitor the results of EvoSuite and integrate generated tests in their source tree. In this paper, we discuss the resulting architecture of the plugins, and the challenges arising when building such plugins. Although the plugins described are targeted for the EvoSuite tool, they can be adapted and their architecture can be reused for other test generation tools as well.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115369153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Empirical Study on Detecting and Fixing Buffer Overflow Bugs","authors":"Tao Ye, Lingming Zhang, Linzhang Wang, Xuandong Li","doi":"10.1109/ICST.2016.21","DOIUrl":"https://doi.org/10.1109/ICST.2016.21","url":null,"abstract":"Buffer overflow is one of the most common types of software security vulnerabilities. Although researchers have proposed various static and dynamic techniques for buffer overflow detection, buffer overflow attacks against both legacy and newly-deployed software systems are still quite prevalent. Compared with dynamic detection techniques, static techniques are more systematic and scalable. However, there are few studies on the effectiveness of state-of-the-art static buffer overflow detection techniques. In this paper, we perform an in-depth quantitative and qualitative study on static buffer overflow detection. More specifically, we obtain both the buggy and fixed versions of 100 buffer overflow bugs from 63 real-world projects totalling 28 MLoC (Millions of Lines of Code) based on the reports in Common Vulnerabilities and Exposures (CVE). Then, quantitatively, we apply Fortify, Checkmarx, and Splint to all the buggy versions to investigate their false negatives, and also apply them to all the fixed versions to investigate their false positives. We also qualitatively investigate the causes for the false-negatives and false-positives of studied techniques to guide the design and implementation of more advanced buffer overflow detection techniques. Finally, we also categorized the patterns of manual buffer overflow repair actions to guide automated repair techniques for buffer overflow. The experiment data is available at http://bo-study.github.io/Buffer-Overflow-Cases/.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121404959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Coordinated Collaborative Testing of Shared Software Components","authors":"Teng Long, Il-Chul Yoon, A. Porter, A. Memon, A. Sussman","doi":"10.1109/ICST.2016.38","DOIUrl":"https://doi.org/10.1109/ICST.2016.38","url":null,"abstract":"Software developers commonly build their software systems by reusing other components developed and maintained by third-party developer groups. As the components evolve over time, new end-user machine configurations that contain new component versions will be added continuously for the potential user base. Therefore developers must test whether their components function correctly in the new configurations to ensure the quality of the overall systems. This would be achievable if developers could provision the configurations in house and conduct regression testing over the configurations. However, this is often very time-consuming and also there can be redundancy in test effort between developers when a common set of components is reused for providing the functionality of the systems. In this paper, we present a coordinated collaborative regression testing process for multiple developer groups. It involves a scheduling method for distributing test effort across the groups at component updates, with the objectives of reducing test redundancy between the groups and also shortening the time window in which compatibility faults are exposed to user community. The process is implemented on Conch, a collaborative test data repository and services we developed in our previous work. Conch has been modified to function as the test process coordinator, as well as the shared repository of test data. Our experiments over the 1.5-year evolution history of eleven components in the Ubuntu developer community show that developers can quickly discover compatibility faults by applying the coordinated process. Moreover, total testing time is comparable to the scenario where the developers conduct regression testing only at updates of their own components.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116510106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symbooglix: A Symbolic Execution Engine for Boogie Programs","authors":"D. Liew, Cristian Cadar, A. Donaldson","doi":"10.1109/ICST.2016.11","DOIUrl":"https://doi.org/10.1109/ICST.2016.11","url":null,"abstract":"We present the design and implementation of Symbooglix, a symbolic execution engine for the Boogie intermediate verification language. Symbooglix aims to find bugs in Boogie programs efficiently, providing bug-finding capabilities for any program analysis framework that uses Boogie as a target language. We discuss the technical challenges associated with handling Boogie, and describe how we optimised Symbooglix using a small training set of benchmarks. This empirically-driven optimisation approach avoids over-fitting Symbooglix to ourbenchmarks, enabling a fair comparison with other tools. We present an evaluation across 3749 Boogie programs generated from the SV-COMP suite of C programs using the SMACK front-end, and 579 Boogie programs originating from several OpenCL and CUDA GPU benchmark suites, translated by the GPU Verify front-end. Our results show that Symbooglix significantly out-performs Boogaloo, an existing symbolic execution tool for Boogie, and is competitivewith GPUVerify on benchmarks for which GPUVerify is highly optimised. While generally less effective than the Corral and Duality tools on the SV-COMP suite, Symbooglix is complementary to them in terms of bug-finding ability.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126277904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Skyfire: Model-Based Testing with Cucumber","authors":"Nan Li, Anthony Escalona, Tariq Kamal","doi":"10.1109/ICST.2016.41","DOIUrl":"https://doi.org/10.1109/ICST.2016.41","url":null,"abstract":"In the software industry, a Behavior-Driven Development (BDD) tool, Cucumber, has been widely used by practitioners. Usually product analysts, developers, and testers manually write BDD test scenarios that describe system behaviors. Testers write implementation for the BDD scenarios by hand and execute the Cucumber tests. Cucumber provides transparency about what test scenarios are covered and how the test scenarios are mapped to executable tests. One drawback of the Cucumber BDD approach is that test scenarios are generated manually. Thus, the test scenarios are usually weak. More importantly, practitioners do not have a metric to measure test coverage. In this paper, we present a Model-Based Testing (MBT) tool, skyfire. Skyfire can automatically generate effective Cucumber test scenarios to replace manually generated test scenarios. Skyfire reads a behavioral UML diagram (e.g., a state machine diagram), identifies all necessary elements (e.g., transitions) of the diagram, generates effective tests to satisfy various graph coverage criteria, and converts the tests into Cucumber scenarios. Then testers write Cucumber mappings for the generated scenarios. Skyfire does not only generate effective tests but is also completely compatible with the existing agile development and continuous integration (CI) rhythm. We present the design architecture and implementation of skyfire, as well as an industrial case study to show how skyfire is used in practice.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121883162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Repeated Combinatorial Test Design -- Unleashing the Potential in Multiple Testing Iterations","authors":"Itai Segall","doi":"10.1109/ICST.2016.14","DOIUrl":"https://doi.org/10.1109/ICST.2016.14","url":null,"abstract":"Test design is the process of planning and designing the tests to be performedon a software system. The time scale at which organizations perform and modifytheir test design is typically orders of magnitude larger than that ofdevelopment, especially in modern agile development processes. As a result, often the carefully designed and optimized test suites end up beingrepeatedly executed with no variability between iterations, thus wasting crucialtime and resources. In this work we propose a repeated test planningprocess based on Combinatorial Test Design (CTD), where test iterations areplanned and executed while taking into account the previously executediterations. While each iteration satisfies the same test requirements as before, we leverage the degrees of freedom in test planning in order to reach fullcoverage of higher levels of requirements over the course of iterations. Wesuggest algorithms for doing so efficiently, and evaluate our approach over aset of models collected from different sources in literature. The evaluation demonstrates significant improvement in coverage rate of higher levels, comparedto the naïve approaches.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129138379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing Concurrent Software Systems","authors":"F. A. Bianchi","doi":"10.1109/ICST.2016.45","DOIUrl":"https://doi.org/10.1109/ICST.2016.45","url":null,"abstract":"Many modern software systems are intrinsically concurrent, consisting of multiple execution flows that progress simultaneously. Concurrency introduces new challenges to the testing process, since some faults can manifest only under specific interleavings of the execution flows. Thus, testing concurrent systems requires not only to explore the space of possible inputs, but also the space of possible interleavings, which can be intractable also for small programs. Most of the research on testing concurrent systems has focused exclusively on selecting interleavings that expose potentially dangerous patterns, such as data races and atomicity violations. However, by ignoring the high-level semantics of the system, these patterns may miss relevant faults and generate many false positive alarms. In this PhD work we aim to define a novel testing technique that (i) offers a holistic approach to select both test inputs and interleavings, and (ii) exploits high-level semantic information on the program to guide the selection and to improve the accuracy of the state-of-the-art techniques.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125137233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Verification Methodology for Fully Autonomous Heavy Vehicles","authors":"J. Gustavsson","doi":"10.1109/ICST.2016.42","DOIUrl":"https://doi.org/10.1109/ICST.2016.42","url":null,"abstract":"The introduction of fully autonomous vehicles poses a number of concerns regarding the safety and dependability of vehicle operation. Best practice standards within the automotive industry rely on the driver operating the vehicle. With the transition away from manual control, an increased emphasis has to be placed on verification during the vehicle development stages. The work presented within this paper aims to establish a framework for the various verification activities performed during development, and their impact on the safety of the vehicle, as well as a set of guidelines for verification of the decision making process of autonomous vehicles.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115491371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic Testing of Interactive Applications","authors":"D. Zuddas","doi":"10.1109/ICST.2016.46","DOIUrl":"https://doi.org/10.1109/ICST.2016.46","url":null,"abstract":"Interactive applications, such as mobile or web apps, are now essential in our life, and verifying their reliability has become a key issue. Automatically generating test cases can dramatically improve the testing process, and has recently drawn the attention of researchers. So far, automatic test case generation techniques have exploited the structural characteristics of the GUI of interactive applications, paying little attention to the semantic aspects of the applications. In my PhD, I plan to define new approaches to automatically test interactive applications by exploiting inferred semantics information.","PeriodicalId":155554,"journal":{"name":"2016 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116243349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}