{"title":"SEViz: A Tool for Visualizing Symbolic Execution","authors":"Dávid Honfi, András Vörös, Zoltán Micskei","doi":"10.1109/ICST.2015.7102631","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102631","url":null,"abstract":"Generating test inputs from source code is a topic that is starting to transfer from academic research to industrial application. Symbolic execution is one of the promising techniques for such white-box test generation. However, test input generation for non-trivial programs often reaches limitations even when using the most mature tools due to the underlying complexity of the problem. In such cases, visualizing the symbolic execution and the test generation process could help to quickly identify required configurations and modifications that enable the generation of further test inputs and increase coverage. We present a tool that is able to interactively visualize symbolic execution. We also show how this tool can be used for educational and engineering purposes.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"379 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133346630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Seamless Integration of Test Information Management and Calibration Data Management in the Overall Automotive Development Process","authors":"C. E. Salloum","doi":"10.1109/ICST.2015.7102629","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102629","url":null,"abstract":"Testing and calibration plays a significant role in the automotive development process. In order to shorten development cycles and to enable the detection of problems at early design phases, OEMs and suppliers are often operating a highly complex testing infrastructure covering all development stages including pure simulation in the office, engine test, drive train test, chassis test, vehicle test on the test bed and road test. A multitude of IT systems from different vendors often complemented by custom in-house solutions is usually required to make testing in such scenarios manageable. Examples are systems for requirement management, test management, calibration management, measurement data management, model management and larger systems for application and product lifecycle management. All these systems increase the productivity within their domain, but an increase of productivity on the macro level can only be achieved when these tool are well integrated to enable seamless collaboration among the different engineering domains and stakeholders. In larger organizations, the cost and time required to integrate the individual IT systems into a seamless system engineering environment is extraordinarily high. The Openness of an IT system with respect to system integration is the cornerstone to reduce this effort. The ARTEMIS CRYSTAL project has recognized this need. With 68 partners and a budget of over 82 million Euro, it has the objective to develop an open Interoperability Specification that enables tools to share and interlink their data in a typical multi-vendor environment. In this presentation we will show how the CRYSTAL Interoperability Specification can be applied in a typical automotive testing and calibration scenario. In particular we will present how systems for Calibration Data Management and Test Information Management can be integrated in the overall automotive development process. We will present details about the technical realization, assess the implementation overhead and show the benefits gained by a better tool integration.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132353001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iterative Instrumentation for Code Coverage in Time-Sensitive Systems","authors":"Tosapon Pankumhang, M. Rutherford","doi":"10.1109/ICST.2015.7102594","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102594","url":null,"abstract":"In software testing, runtime code coverage is usually measured by instrumenting the executable code. In most cases the effect of the additional instructions is negligible, but for time-sensitive systems it can potentially alter the timing of executing code. This may lead to inconsistent test results, as the tests behave differently when run against non-instrumented code. In this paper, we discuss traditional code coverage techniques (i.e. instrumentation) and our code coverage technique called \"iterative instrumentation\" which has no runtime overhead. We analyze the impact of instrumentation runtime overhead through a case study of heuristic pathfinders. Next, we compare the effectiveness of iterative instrumentation to traditional instrumentation for measuring code coverage on a case study of control software. Finally, we discuss possible techniques to improve the quality of our technique including the limitations of this paper. Our studies confirm that instrumentation runtime overhead can alter the timing of time-sensitive software systems while our technique can be used effectively with no runtime overhead.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"332 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134479862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sound and Quasi-Complete Detection of Infeasible Test Requirements","authors":"Sébastien Bardin, Mickaël Delahaye, Robin David, N. Kosmatov, Mike Papadakis, Yves Le Traon, J. Marion","doi":"10.1109/ICST.2015.7102607","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102607","url":null,"abstract":"In software testing, coverage criteria specify the requirements to be covered by the test cases. However, in practice such criteria are limited due to the well-known infeasibility problem, which concerns elements/requirements that cannot be covered by any test case. To deal with this issue we revisit and improve state-of-the-art static analysis techniques, such as Value Analysis and Weakest Precondition calculus. We propose a lightweight greybox scheme for combining these two techniques in a complementary way. In particular we focus on detecting infeasible test requirements in an automatic and sound way for condition coverage, multiple condition coverage and weak mutation testing criteria. Experimental results show that our method is capable of detecting almost all the infeasible test requirements, 95% on average, in a reasonable amount of time, i.e., less than 40 seconds, making it practical for unit testing.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116775483","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigating Information Overload Caused by Automated Testing - a Clustering Approach in Multi-Branch Development","authors":"Nicklas Erman, Vanja Tufvesson, Markus Borg, P. Runeson, A. Ardö","doi":"10.1109/ICST.2015.7102596","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102596","url":null,"abstract":"Background. Test automation is a widely used technique to increase the efficiency of software testing. However, executing more test cases increases the effort required to analyze test results. At Qlik, automated tests run nightly for up to 20 development branches, each containing thousands of test cases, resulting in information overload. Aim. We therefore develop a tool that supports the analysis of test results. Method. We create NIOCAT, a tool that clusters similar test case failures, to help the analyst identify underlying causes. To evaluate the tool, experiments on manually created subsets of failed test cases representing different use cases are conducted, and a focus group meeting is held with test analysts at Qlik. Results. The case study shows that NIOCAT creates accurate clusters, in line with analyses performed by human analysts. Further, the potential time-savings of our approach is confirmed by the participants in the focus group. Conclusions. NIOCAT provides a feasible complement to current automated testing practices at Qlik by reducing information overload.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123475916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MoMut::UML Model-Based Mutation Testing for UML","authors":"Willibald Krenn, R. Schlick, Stefan Tiran, B. Aichernig, Elisabeth Jöbstl, H. Brandl","doi":"10.1109/ICST.2015.7102627","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102627","url":null,"abstract":"Model-based mutation testing (MBMT) is a promising testing methodology that relies on a model of the system under test (SUT) to create test cases. Hence, MBMT is a so-called black-box testing approach. It also is fault based, as it creates test cases that are guaranteed to reveal certain faults: after inserting a fault into the model of the SUT, it looks for a test case revealing this fault. This turns MBMT into one of the most powerful and versatile test case generation approaches available as its tests are able to demonstrate the absence of certain faults, can achieve both, control-flow and data-flow coverage of model elements, and also may include information about the behaviour in the failure case. The latter becomes handy whenever the test execution framework is bound in the number of observations it can make and - as a consequence - has to restrict them. However, this versatility comes at a price: MBMT is computationally expensive. The tool MoMuT::UML (https://www.momut.org) is the result of a multi-year research effort to bring MBMT from the academic drawing board to industrial use. In this paper we present the current stable version, share the lessons learnt when applying two generations of MoMuT::UML in an industrial setting, and give an outlook on the upcoming, third,generation.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129906410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"If A Fails, Can B Still Succeed? Inferring Dependencies between Test Results in Automotive System Testing","authors":"Stephan Arlt, Tobias Morciniec, A. Podelski, Silke Wagner","doi":"10.1109/ICST.2015.7102593","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102593","url":null,"abstract":"In this paper we propose an approach that, given a structured requirements specification, allows the automatic online detection of a redundant test case. This means that, at each time point during a testing phase, one automatically infers the failure of a test case from the current status of successful tests and failed tests. By a structured requirements specification we mean that one uses a hierarchical structure and types to document the (natural language) formulation of requirements. We have implemented the approach. The evaluation of our implementation in a case study in the context of the development process for Mercedes-Benz vehicles at Daimler AG indicates the practical potential of our approach.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129857556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"WebSee: A Tool for Debugging HTML Presentation Failures","authors":"Sonal Mahajan, William G. J. Halfond","doi":"10.1109/ICST.2015.7102638","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102638","url":null,"abstract":"Presentation failures in a website can negatively impact end users' perception of the quality of the website, the services it delivers, and the branding a company is trying to achieve. Presentation failures can occur easily in modern web applications because of the highly complex and dynamic nature of the HTML, CSS, and JavaScript that define a web page's visual appearance. Debugging such failures manually is time consuming and error-prone, and existing techniques do not provide an automated debugging solution. In this paper, we present our tool, WebSee, that provides a fully automated debugging solution for presentation failures in web applications. When run on real-world web applications, WebSee was able to accurately and quickly identify faulty HTML elements.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128936433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Re-Using Generators of Complex Test Data","authors":"Simon M. Poulding, R. Feldt","doi":"10.1109/ICST.2015.7102605","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102605","url":null,"abstract":"The efficiency of random testing can be improved by sampling test inputs using a generating program that incorporates knowledge about the types of input most likely to detect faults in the software-under-test (SUT). But when the input of the SUT is a complex data type--such as a domain-specific string, array, record, tree, or graph--creating such a generator may be time- consuming and may require the tester to have substantial prior experience of the domain. In this paper we propose the re-use of generators created for one SUT on other SUTs that take the same complex data type as input. The re-use of a generator in this way would have little overhead, and we hypothesise that the re-used generator will typically be as least as efficient as the most straightforward form of random testing: sampling test inputs from the uniform distribution. We investigate this proposal for two data types using five generators. We assess test efficiency against seven real-world SUTs, and in terms of both structural coverage and the detection of seeded faults. The results support the re-use of generators for complex data types, and suggest that if a library of generators is to be maintained for this purpose, it is possible to extend library generators to accommodate the specific testing requirements of newly-encountered SUTs.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128454279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generating Complex and Faulty Test Data through Model-Based Mutation Analysis","authors":"Daniel Di Nardo, F. Pastore, L. Briand","doi":"10.1109/ICST.2015.7102589","DOIUrl":"https://doi.org/10.1109/ICST.2015.7102589","url":null,"abstract":"Testing the correct behaviour of data processing systems in the presence of faulty data is extremely expensive. The data structures processed by these systems are often complex, with many data fields and multiple constraints among them. Software engineers, in charge of testing these systems, have to handcraft complex data files or databases, while ensuring compliance with the multiple constraints to prevent the generation of trivially invalid inputs. In addition, assessing test results often means analysing complex output and log data. Though many techniques have been proposed to automatically test systems based on models, little exists in the literature to support the testing of systems where the complexity is in the data consumed in input or produced in output, with complex constraints between them. In particular, such systems often need to be tested with the presence of faults in the input data, in order to assess the robustness and behaviour of the system in response to such faults. This paper presents an automated test technique that relies upon six generic mutation operators to automatically generate faulty data. The technique receives two inputs: field data and a data model, i.e. a UML class diagram annotated with stereotypes and OCL constraints. The annotated class diagram is used to tailor the behaviour of the generic mutation operators to the fault model that is assumed for the system under test and the environment in which it is deployed. Empirical results obtained with a large data acquisition system in the satellite domain show that our approach can successfully automate the generation of test suites that achieve slightly better instruction coverage than manual testing based on domain expertise.","PeriodicalId":401414,"journal":{"name":"2015 IEEE 8th International Conference on Software Testing, Verification and Validation (ICST)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126651833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}