{"title":"Reliability assessment for distributed systems via communication abstraction and refinement","authors":"Lin Gui, Jun Sun, Yang Liu, J. Dong","doi":"10.1145/2771783.2771794","DOIUrl":"https://doi.org/10.1145/2771783.2771794","url":null,"abstract":"Distributed systems like cloud-based services are ever more popular. Assessing the reliability of distributed systems is highly non-trivial. Particularly, the order of executions among distributed components adds a dimension of non-determinism, which invalidates existing reliability assessment methods based on Markov chains. Probabilistic model checking based on models like Markov decision processes is designed to deal with scenarios involving both probabilistic behavior (e.g., reliabilities of system components) and non-determinism. However, its application is currently limited by state space explosion, which makes reliability assessment of distributed system particularly difficult. In this work, we improve the probabilistic model checking through a method of abstraction and reduction, which controls the communications among system components and actively reduces the size of each component. We prove the soundness and completeness of the proposed approach. Through an implementation in a software toolkit and evaluations with several systems, we show that our approach often reduces the size of the state space by several orders of magnitude, while still producing sound and accurate assessment.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127291074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automating performance bottleneck detection using search-based application profiling","authors":"Du Shen, Qi Luo, D. Poshyvanyk, M. Grechanik","doi":"10.1145/2771783.2771816","DOIUrl":"https://doi.org/10.1145/2771783.2771816","url":null,"abstract":"Application profiling is an important performance analysis technique, when an application under test is analyzed dynamically to determine its space and time complexities and the usage of its instructions. A big and important challenge is to profile nontrivial web applications with large numbers of combinations of their input parameter values. Identifying and understanding particular subsets of inputs leading to performance bottlenecks is mostly manual, intellectually intensive and laborious procedure. We propose a novel approach for automating performance bottleneck detection using search-based input-sensitive application profiling. Our key idea is to use a genetic algorithm as a search heuristic for obtaining combinations of input parameter values that maximizes a fitness function that represents the elapsed execution time of the application. We implemented our approach, coined as Genetic Algorithm-driven Profiler (GA-Prof) that combines a search-based heuristic with contrast data mining of execution traces to accurately determine performance bottlenecks. We evaluated GA-Prof to determine how effectively and efficiently it can detect injected performance bottlenecks into three popular open source web applications. Our results demonstrate that GA-Prof efficiently explores a large space of input value combinations while automatically and accurately detecting performance bottlenecks, thus suggesting that it is effective for automatic profiling.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"1102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116051492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable program analysis through proof caching (doctoral symposium)","authors":"Andrea Aquino","doi":"10.1145/2771783.2784773","DOIUrl":"https://doi.org/10.1145/2771783.2784773","url":null,"abstract":"Despite the remarkable advances attained by the SMT community in the last decade, solving complex formulas still represents the main bottleneck to the scalability of program analysis techniques. Recent research work has shown that formulas generated during program analysis recur, and such redundancy can be captured and exploited by means of caching frameworks to avoid repeating complex queries to solvers. Although current approaches show that reusing formulas syntactically can indeed reduce the impact of SMT solvers on program analysis, they still suffer from being logic-dependent, and performing poorly on huge sets of heterogenous formulas. The core idea of our approach is to go beyond merely syntactical caching frameworks by designing a caching framework that is able to reuse proofs instead of formulas. In fact, even formulas that are syntactically different can share solutions. We aim to study the recurrence of proofs across heterogeneous formulas, and to define a technique to efficiently retrieve such proofs. We plan to exploit a suitable distance function that measures the amount of proofs shared by two formulas to allow the efficient retrieval of candidate proofs within a potentially large space of proofs. In this paper, we present the problem, draft the core idea, discuss the early results and present our research plans.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122221221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Test-case generation for runtime analysis and vice versa: verification of aircraft separation assurance","authors":"Marko Dimjasevic, D. Giannakopoulou","doi":"10.1145/2771783.2771804","DOIUrl":"https://doi.org/10.1145/2771783.2771804","url":null,"abstract":"This paper addresses the problem of specifying properties of aircraft separation assurance software, and verifying these properties at runtime. In particular, we target AutoResolver, a large, complex air-traffic control system that predicts and resolves aircraft loss of separation. In previous work, we developed a light-weight testing environment for AutoResolver. Our work contributed a wrapper around AutoResolver, which enabled the automated generation and fast execution of hundreds of thousands of tests. The focus of the work presented here is in specifying requirements for AutoResolver, in ensuring the generation of test cases that cover these requirements, and in developing a runtime infrastructure for automatically checking the requirements. Such infrastructure must be completely transparent to the AutoResolver code base. Our work combines test-case generation and runtime verification in innovative ways in order to address these challenges. The paper includes a detailed evaluation and discussion of our verification effort.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124769235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scalable and precise taint analysis for Android","authors":"Wei Huang, Yao Dong, Ana L. Milanova, Julian T Dolby","doi":"10.1145/2771783.2771803","DOIUrl":"https://doi.org/10.1145/2771783.2771803","url":null,"abstract":"We propose a type-based taint analysis for Android. Concretely, we present DFlow, a context-sensitive information flow type system, and DroidInfer, the corresponding type inference analysis for detecting privacy leaks in Android apps. We present novel techniques for error reporting based on CFL-reachability, as well as novel techniques for handling of Android-specific features, including libraries, multiple entry points and callbacks, and inter-component communication. Empirical results show that our approach is scalable and precise. DroidInfer scales well in terms of time and memory and has false-positive rate of 15.7%. It detects privacy leaks in apps from the Google Play Store and in known malware.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130304277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mining change history for test-plan generation (doctoral symposium)","authors":"Thomas Rolfsnes","doi":"10.1145/2771783.2784776","DOIUrl":"https://doi.org/10.1145/2771783.2784776","url":null,"abstract":"Regression testing is an essential step in safeguarding the evolution of a system, yet there is often not enough time to exercise all available tests. Identifying the subset of tests that can reveal potential issues introduced by a change is a challenge. It requires identifying the tests that test the chan- ged part of the software. Furthermore, and more challeng- ing, it requires identifying the parts of the system that are potentially affected by that change, a task typically done by means of static program analysis. In this doctoral research, we investigate an alternative approach, using software repos- itory mining. We propose a method that mines the change history of a system to uncover dependencies, and uses these for test-selection and test-prioritization. By reducing the amount of test to exercise, and limiting time spend on test- plan creation (i.e., selecting and prioritizing tests), the aim of our approach is to increase cost-effectiveness of software regression testing.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123860800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Practical regression test selection with dynamic file dependencies","authors":"Miloš Gligorić, Lamyaa Eloussi, D. Marinov","doi":"10.1145/2771783.2771784","DOIUrl":"https://doi.org/10.1145/2771783.2771784","url":null,"abstract":"Regression testing is important but can be time-intensive. One approach to speed it up is regression test selection (RTS), which runs only a subset of tests. RTS was proposed over three decades ago but has not been widely adopted in practice. Meanwhile, testing frameworks, such as JUnit, are widely adopted and well integrated with many popular build systems. Hence, integrating RTS in a testing framework already used by many projects would increase the likelihood that RTS is adopted. We propose a new, lightweight RTS technique, called Ekstazi, that can integrate well with testing frameworks. Ekstazi tracks dynamic dependencies of tests on files, and unlike most prior RTS techniques, Ekstazi requires no integration with version-control systems. We implemented Ekstazi for Java and JUnit, and evaluated it on 615 revisions of 32 open-source projects (totaling almost 5M LOC) with shorter- and longer-running test suites. The results show that Ekstazi reduced the end-to-end testing time 32% on average, and 54% for longer-running test suites, compared to executing all tests. Ekstazi also has lower end-to-end time than the existing techniques, despite the fact that it selects more tests. Ekstazi has been adopted by several popular open source projects, including Apache Camel, Apache Commons Math, and Apache CXF.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121909344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing selection of competing features via feedback-directed evolutionary algorithms","authors":"Tian Huat Tan, Yinxing Xue, Manman Chen, Jun Sun, Yang Liu, J. Dong","doi":"10.1145/2771783.2771808","DOIUrl":"https://doi.org/10.1145/2771783.2771808","url":null,"abstract":"Software that support various groups of customers usually require complicated configurations to attain different functionalities. To model the configuration options, feature model is proposed to capture the commonalities and competing variabilities of the product variants in software family or Software Product Line (SPL). A key challenge for deriving a new product is to find a set of features that do not have inconsistencies or conflicts, yet optimize multiple objectives (e.g., minimizing cost and maximizing number of features), which are often competing with each other. Existing works have attempted to make use of evolutionary algorithms (EAs) to address this problem. In this work, we incorporated a novel feedback-directed mechanism into existing EAs. Our empirical results have shown that our method has improved noticeably over all unguided version of EAs on the optimal feature selection. In particular, for case studies in SPLOT and LVAT repositories, the feedback-directed Indicator-Based EA (IBEA) has increased the number of correct solutions found by 72.33% and 75%, compared to unguided IBEA. In addition, by leveraging a pre-computed solution, we have found 34 sound solutions for Linux X86, which contains 6888 features, in less than 40 seconds.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122145270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic taint tracking for Java with phosphor (demo)","authors":"Jonathan Bell, G. Kaiser","doi":"10.1145/2771783.2784768","DOIUrl":"https://doi.org/10.1145/2771783.2784768","url":null,"abstract":"Dynamic taint tracking is an information flow analysis that can be applied to many areas of testing. Phosphor is the first portable, accurate and performant dynamic taint tracking system for Java. While previous systems for performing general-purpose taint tracking in the JVM required specialized research JVMs, Phosphor works with standard off-the-shelf JVMs (such as Oracle's HotSpot and OpenJDK's IcedTea). Phosphor also differs from previous portable JVM taint tracking systems that were not general purpose (e.g. tracked only tags on Strings and no other type), in that it tracks tags on all variables. We have also made several enhancements to Phosphor, to track taint tags through control flow (in addition to data flow), as well as to track an arbitrary number of relationships between taint tags (rather than be limited to only 32 tags). In this demonstration, we show how developers writing testing tools can benefit from Phosphor, and explain briefly how to interact with it.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"316 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124481236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reusing constraint proofs in program analysis","authors":"Andrea Aquino, F. A. Bianchi, Meixian Chen, G. Denaro, M. Pezzè","doi":"10.1145/2771783.2771802","DOIUrl":"https://doi.org/10.1145/2771783.2771802","url":null,"abstract":"Symbolic analysis techniques have largely improved over the years, and are now approaching an industrial maturity level. One of the main limitations to the scalability of symbolic analysis is the impact of constraint solving that is still a relevant bottleneck for the applicability of symbolic techniques, despite the dramatic improvements of the last decades. In this paper we discuss a novel approach to deal with the constraint solving bottleneck. Starting from the observation that constraints may recur during the analysis of the same as well as different programs, we investigate the advantages of complementing constraint solving with searching for the satisfiability proof of a constraint in a repository of constraint proofs. We extend recent proposals with powerful simplifications and an original canonical form of the constraints that reduce syntactically different albeit equivalent constraints to the same form, and thus facilitate the search for equivalent constraints in large repositories. The experimental results we attained indicate that the proposed approach improves over both similar solutions and state of the art constraint solvers.","PeriodicalId":264859,"journal":{"name":"Proceedings of the 2015 International Symposium on Software Testing and Analysis","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-07-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125085234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}