{"title":"Automata Language Equivalence vs. Simulations for Model-Based Mutant Equivalence: An Empirical Evaluation","authors":"Xavier Devroey, Gilles Perrouin, Mike Papadakis, Axel Legay, Pierre-Yves Schobbens, P. Heymans","doi":"10.1109/ICST.2017.46","DOIUrl":"https://doi.org/10.1109/ICST.2017.46","url":null,"abstract":"Mutation analysis is a popular test assessment method. It relies on the mutation score, which indicates how many mutants are revealed by a test suite. Yet, there are mutants whose behaviour is equivalent to the original system, wasting analysis resources and preventing the satisfaction of the full (100%) mutation score. For finite behavioural models, the Equivalent Mutant Problem (EMP) can be addressed through language equivalence of non-deterministic finite automata, which is a well-studied, yet computationally expensive, problem in automata theory. In this paper, we report on our preliminary assessment of a state-of-the-art exact language equivalence tool to handle the EMP against 3 models of size up to 15,000 states on 1170 mutants. We introduce random and mutation-biased simulation heuristics as baselines for comparison. Results show that the exact approach is often more than ten times faster in the weak mutation scenario. For strong mutation, our biased simulations are faster for models larger than 300 states. They can be up to 1,000 times faster while limiting the error of misclassifying non-equivalent mutants as equivalent to 10% on average. We therefore conclude that the approaches can be combined for improved efficiency.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116105977","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recovering Semantic Traceability Links between APIs and Security Vulnerabilities: An Ontological Modeling Approach","authors":"Sultan S. Al-Qahtani, Ellis E. Eghan, J. Rilling","doi":"10.1109/ICST.2017.15","DOIUrl":"https://doi.org/10.1109/ICST.2017.15","url":null,"abstract":"Over the last decade, a globalization of the software industry took place, which facilitated the sharing and reuse of code across existing project boundaries. At the same time, such global reuse also introduces new challenges to the software engineering community, with not only components but also their problems and vulnerabilities being now shared. For example, vulnerabilities found in APIs no longer affect only individual projects but instead might spread across projects and even global software ecosystem borders. Tracing these vulnerabilities at a global scale becomes an inherently difficult task since many of the existing resources required for such analysis still rely on proprietary knowledge representation. In this research, we introduce an ontology-based knowledge modeling approach that can eliminate such information silos. More specifically, we focus on linking security knowledge with other software knowledge to improve traceability and trust in software products (APIs). Our approach takes advantage of the Semantic Web and its reasoning services, to trace and assess the impact of security vulnerabilities across project boundaries. We present a case study, to illustrate the applicability and flexibility of our ontological modeling approach by tracing vulnerabilities across project and resource boundaries.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123669907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"O!Snap: Cost-Efficient Testing in the Cloud","authors":"Alessio Gambi, Alessandra Gorla, A. Zeller","doi":"10.1109/ICST.2017.51","DOIUrl":"https://doi.org/10.1109/ICST.2017.51","url":null,"abstract":"Porting a testing environment to a cloud infrastructure is not straightforward. This paper presents O!Snap, an approach to generate test plans to cost-efficiently execute tests in the cloud. O!Snap automatically maximizes reuse of existing virtual machines, and interleaves the creation of updated test images with the execution of tests to minimize overall test execution time and/or cost. In an evaluation involving 2,600+ packages and 24,900+ test jobs of the Debian continuous integration environment, O!Snap reduces test setup time by up to 88% and test execution time by up to 43.3% without additional costs.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122677985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perphecy: Performance Regression Test Selection Made Simple but Effective","authors":"Augusto Born de Oliveira, S. Fischmeister, Amer Diwan, Matthias Hauswirth, P. Sweeney","doi":"10.1109/ICST.2017.17","DOIUrl":"https://doi.org/10.1109/ICST.2017.17","url":null,"abstract":"Developers of performance sensitive production software are in a dilemma: performance regression tests are too costly to run at each commit, but skipping the tests delays and complicates performance regression detection. Ideally, developers would have a system that predicts whether a given commit is likely to impact performance and suggests which tests to run to detect a potential performance regression. Prior approaches towards this problem require static or dynamic analyses that limit their generality and applicability. This paper presents an approach that is simple and general, and that works surprisingly well for real applications.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"179 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133704868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FIFA: A Kernel-Level Fault Injection Framework for ARM-Based Embedded Linux System","authors":"Eunji Jeong, Namgoo Lee, Jinhan Kim, Duseok Kang, S. Ha","doi":"10.1109/ICST.2017.10","DOIUrl":"https://doi.org/10.1109/ICST.2017.10","url":null,"abstract":"Emulating fault scenarios by injecting faults intentionally is commonly used to test and verify the robustness of a system. As the number of hardware devices integrated into an embedded system tends to increase consistently and the chance of hardware failure is expected to increase in an SoC, it becomes important to emulate fault scenarios caused by hardware-related errors. To this end, we present a kernel-level fault injection framework for ARM-based embedded Linux systems, called FIFA, aiming to investigate the effect of an individual hardware error in a real hardware platform rather than performing statistical analysis by random experiments. FIFA consists of two complementary fault injection techniques, one is based on the Kernel GNU Debugger and the other on hardware breakpoints. Compared with the previous work that emulates bit-flip errors only, FIFA supports other types of errors such as time delay and device failure. The viability of the proposed framework is proved by real-life experiments with an ODROID-XU4 system.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133664971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Semantic Similarity in Crawling-Based Web Application Testing","authors":"Jun-Wei Lin, Farn Wang, Paul Chu","doi":"10.1109/ICST.2017.20","DOIUrl":"https://doi.org/10.1109/ICST.2017.20","url":null,"abstract":"To automatically test web applications, crawling-based techniques are usually adopted to mine the behavior models, explore the state spaces or detect the violated invariants of the applications. However, their broad use is limited by the required manual configurations for input value selection, GUI state comparison and clickable detection. In existing crawlers, the configurations are usually string-matching based rules looking for tags or attributes of DOM elements, and often application-specific. Moreover, in input topic identification, it can be difficult to determine which rule suggests a better match when several rules match an input field to more than one topic. This paper presents a natural-language approach based on semantic similarity to address the above issues. The proposed approach represents DOM elements as vectors in a vector space formed by the words used in the elements. The topics of encountered input fields during crawling can then be inferred by their similarities with ones in a labeled corpus. Semantic similarity can also be applied to suggest if a GUI state is newly discovered and a DOM element is clickable under an unsupervised learning paradigm. We evaluated the proposed approach in input topic identification with 100 real-world forms and GUI state comparison with real data from industry. Our evaluation shows that the proposed approach has comparable or better performance to the conventional techniques. Experiments in input topic identification also show that the accuracy of the rule-based approach can be improved by up to 22% when integrated with our approach.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133235709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavioral Execution Comparison: Are Tests Representative of Field Behavior?","authors":"Qianqian Wang, Yuriy Brun, A. Orso","doi":"10.1109/ICST.2017.36","DOIUrl":"https://doi.org/10.1109/ICST.2017.36","url":null,"abstract":"Software testing is the most widely used approach for assessing and improving software quality, but it is inherently incomplete and may not be representative of how the software is used in the field. This paper addresses the questions of to what extent tests represent how real users use software, and how to measure behavioral differences between test and field executions. We study four real-world systems, one used by endusers and three used by other (client) software, and compare test suites written by the systems' developers to field executions using four models of behavior: statement coverage, method coverage, mutation score, and a temporal-invariant-based model we developed. We find that developer-written test suites fail to accurately represent field executions: the tests, on average, miss 6.2% of the statements and 7.7% of the methods exercised in the field, the behavior exercised only in the field kills an extra 8.6% of the mutants, finally, the tests miss 52.6% of the behavioral invariants that occur in the field. In addition, augmenting the in-house test suites with automatically-generated tests by a tool targeting high code coverage only marginally improves the tests' behavioral representativeness. These differences between field and test executions—and in particular the finer-grained and more sophisticated ones that we measured using our invariantbased model—can provide insight for developers and suggest a better method for measuring test suite quality.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122138643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ADRENALIN-RV: Android Runtime Verification Using Load-Time Weaving","authors":"Haiyang Sun, Andrea Rosà, Omar Javed, Walter Binder","doi":"10.1109/ICST.2017.61","DOIUrl":"https://doi.org/10.1109/ICST.2017.61","url":null,"abstract":"Android has become one of the most popular operating systems for mobile devices. As the number of applications for the Android ecosystem grows, so is their complexity, increasing the need for runtime verification on the Android platform. Unfortunately, despite the presence of several runtime verification frameworks for Java bytecode, DEX bytecode used in Android does not benefit from such a wide support. While a few runtime verification tools support applications developed for Android, such tools offer only limited bytecode coverage and may not be able to detect property violations in certain classes. In this paper, we show that ADRENALIN-RV, our new runtime verification tool for Android, overcomes this limitation. In contrast to other frameworks, ADRENALIN-RV weaves monitoring code at load time and is able to instrument all loaded classes. In addition to the default classes inside the application package (APK), ADRENALIN-RV covers both the Android class library and libraries dynamically loaded from the storage, network, or generated dynamically, which related tools cannot verify. Evaluation results demonstrate the increased code coverage of ADRENALIN-RV with respect to other runtime validation tools for Android. Thanks to ADRENALIN-RV, we were able to detect violations that cannot be detected by other tools.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131234092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Comparative Study of Manual and Automated Testing for Industrial Control Software","authors":"Eduard Paul Enoiu, Daniel Sundmark, Adnan Causevic, P. Pettersson","doi":"10.1109/ICST.2017.44","DOIUrl":"https://doi.org/10.1109/ICST.2017.44","url":null,"abstract":"Automated test generation has been suggested as a way of creating tests at a lower cost. Nonetheless, it is not very well studied how such tests compare to manually written ones in terms of cost and effectiveness. This is particularly true for industrial control software, where strict requirements on both specification-based testing and code coverage typically are met with rigorous manual testing. To address this issue, we conducted a case study in which we compared manually and automatically created tests. We used recently developed real-world industrial programs written in the IEC 61131-3, a popular programming language for developing industrial control systems using programmable logic controllers. The results show that automatically generated tests achieve similar code coverage as manually created tests, but in a fraction of the time (an average improvement of roughly 90%). We also found that the use of an automated test generation tool does not result in better fault detection in terms of mutation score compared to manual testing. Specifically, manual tests more effectively detect logical, timer and negation type of faults, compared to automatically generated tests. The results underscore the need to further study how manual testing is performed in industrial practice and the extent to which automated test generation can be used in the development of reliable systems.","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125690503","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards a Testbed for Automotive Cybersecurity","authors":"D. S. Fowler, Madeline Cheah, S. Shaikh, J. Bryans","doi":"10.1109/ICST.2017.62","DOIUrl":"https://doi.org/10.1109/ICST.2017.62","url":null,"abstract":"Modern automotive platforms are cyber-physical in nature and increasingly connected. Cybersecurity testing of such platforms is expensive and carries safety concerns, making it challenging to perform tests for vulnerabilities and refine test methodologies. We propose a testbed, built over a Controller Area Network (CAN) simulator, and validate it against a real-world demonstration of a weakness in a test vehicle using aftermarket On Board Diagnostic (OBD) scanners (dongles).","PeriodicalId":112258,"journal":{"name":"2017 IEEE International Conference on Software Testing, Verification and Validation (ICST)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129541657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}