Proceedings of the 2015 International Symposium on Software Testing and Analysis: Latest Publications

Pegasus: automatic barrier inference for stable multithreaded systems
Monika Dhok, Rashmi Mudduluru, M. Ramanathan
DOI: 10.1145/2771783.2771813 | Published: 2015-07-13
Abstract: Deterministic multithreaded systems (DMTs) are designed to ensure reproducibility of program behavior for a given input. In these systems, even minor changes to the code (or input) can perturb the schedule. This increases the number of feasible schedules, making reasoning about these programs harder. Stable multithreaded systems (StableMTs) address this problem so that a schedule is unaffected by minor changes. Unfortunately, determinism in these systems can potentially serialize the execution, imposing a significant performance penalty. Programmer hints in the form of soft barriers attempt to eliminate the performance bottlenecks; however, the process is arduous, error-prone, and requires manual intervention to reconsider the location of each barrier for every modification to the source code. In this paper, we propose an effective approach to automate the task of adding soft barriers to the source code. Our approach analyzes deterministic program executions to extract the program and semantic order of executions and builds a weighted constraint graph. Using this graph, a schedule is synthesized, which is used to identify bottlenecks and insert soft barriers in the program source. We validate our implementation, named PEGASUS, by applying it to multiple benchmarks. Our experimental results demonstrate that we are able to reduce the overall execution time of programs by up to 34% compared to the execution time when barriers are inserted manually. Moreover, we observe a performance improvement ranging from 38% to 88% compared to programs without barriers. Our experimental results show that adapting PEGASUS to infer barriers for multiple versions of a source program is seamless. The memory and time overheads associated with the use of PEGASUS are negligible, making it a vital cog in building scalable StableMTs.
Citations: 4
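The weighted constraint graph mentioned in the abstract above can be pictured with a small, purely illustrative sketch (this is not the PEGASUS algorithm; the node names and timings are made up). Execution segments become weighted nodes, ordering constraints become edges, and the critical (longest) path of a deterministic schedule points at the segments whose serialization dominates total runtime, i.e., natural candidates for soft-barrier hints:

```python
# Hypothetical sketch: find a scheduling bottleneck from a weighted
# constraint graph of execution segments (not the actual PEGASUS code).
from collections import defaultdict

# Nodes: (thread, segment) -> measured execution time in ms (made-up numbers).
weight = {
    ("T1", "parse"): 5, ("T1", "compute"): 40,
    ("T2", "parse"): 6, ("T2", "compute"): 38,
    ("main", "reduce"): 4,
}

# Edges encode program order plus the semantic (serialized) order imposed
# by the deterministic runtime.
edges = [
    (("T1", "parse"), ("T1", "compute")),
    (("T2", "parse"), ("T2", "compute")),
    (("T1", "compute"), ("main", "reduce")),
    (("T2", "compute"), ("main", "reduce")),
]

succ, indeg = defaultdict(list), defaultdict(int)
for u, v in edges:
    succ[u].append(v)
    indeg[v] += 1

# Longest (critical) path over the DAG via a topological sweep.
dist = {n: weight[n] for n in weight}
pred = {}
ready = [n for n in weight if indeg[n] == 0]
while ready:
    u = ready.pop()
    for v in succ[u]:
        if dist[u] + weight[v] > dist[v]:
            dist[v], pred[v] = dist[u] + weight[v], u
        indeg[v] -= 1
        if indeg[v] == 0:
            ready.append(v)

end = max(dist, key=dist.get)
path = [end]
while path[-1] in pred:
    path.append(pred[path[-1]])
path.reverse()

# Segments on the critical path are candidates for soft-barrier hints:
# releasing the serialized turn before a long computation lets the other
# threads make progress instead of waiting.
print("critical path:", path, "total:", dist[end], "ms")
```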
RTCM: a natural language based, automated, and practical test case generation framework
T. Yue, Shaukat Ali, Man Zhang
DOI: 10.1145/2771783.2771799 | Published: 2015-07-13
Abstract: Based on our experience of collaborating with industry, we observed that test case generation usually relies on test case specifications (TCSs), commonly written in natural language, specifying test cases of a System Under Test at a high level of abstraction. In practice, TCSs are commonly used by test engineers as reference documents for two activities: 1) manually executing the test cases in the TCSs; 2) manually coding the test cases in a test scripting language for automated test case execution. In the latter case, the gap between TCSs and executable test cases has to be filled by test engineers, requiring a significant amount of coding effort and domain knowledge. Motivated by these observations from industry, we first propose, in this paper, a TCS language named Restricted Test Case Modeling (RTCM), which is based on natural language and composed of an easy-to-use template, a set of restriction rules, and keywords. Second, we propose a test case generation tool (aToucan4Test), which takes TCSs in RTCM as input and generates either manual test cases or automatically executable test cases, based on various coverage criteria defined on RTCM. To assess the applicability of RTCM, we manually modeled two industrial case studies and examined 30 automatically generated TCSs. To evaluate aToucan4Test, we modeled three subsystems of a Video Conferencing System developed by Cisco Systems, Norway, and automatically generated executable test cases. These test cases were successfully executed on two commercial software versions. In the paper, we also discuss our experience of applying RTCM and aToucan4Test in an industrial context and compare our approach with other model-based testing methodologies.
Citations: 33
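To give a flavor of what a restricted, keyword-based TCS language enables, here is a minimal sketch under an assumed template; the keywords and restriction rules below are hypothetical and are not RTCM's actual syntax. Each restricted sentence maps to a structured test step from which either a manual instruction or an executable call could be generated:

```python
import re

# Hypothetical restricted-template grammar in the spirit of a keyword-based
# TCS language (not RTCM's actual keywords or restriction rules).
STEP_PATTERNS = [
    # "THE USER ENTERS '<value>' INTO <field>"
    (re.compile(r"THE USER ENTERS '(?P<value>[^']*)' INTO (?P<field>\w+)", re.I),
     lambda m: ("input", m["field"], m["value"])),
    # "THE USER CLICKS <button>"
    (re.compile(r"THE USER CLICKS (?P<button>\w+)", re.I),
     lambda m: ("click", m["button"], None)),
    # "THE SYSTEM SHOWS '<message>'"
    (re.compile(r"THE SYSTEM SHOWS '(?P<msg>[^']*)'", re.I),
     lambda m: ("assert_text", None, m["msg"])),
]

def parse_tcs(lines):
    """Turn restricted natural-language steps into structured test steps."""
    steps = []
    for line in lines:
        for pattern, build in STEP_PATTERNS:
            m = pattern.fullmatch(line.strip())
            if m:
                steps.append(build(m))
                break
        else:
            raise ValueError(f"sentence violates the restriction rules: {line!r}")
    return steps

spec = [
    "THE USER ENTERS 'alice' INTO username",
    "THE USER CLICKS login",
    "THE SYSTEM SHOWS 'Welcome, alice'",
]

for action, target, value in parse_tcs(spec):
    # A code generator would emit e.g. Selenium or JUnit calls here;
    # this sketch just prints the structured form.
    print(action, target, value)
```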
Dynamic detection of inter-application communication vulnerabilities in Android
R. Hay, Omer Tripp, Marco Pistoia
DOI: 10.1145/2771783.2771800 | Published: 2015-07-13
Abstract: A main aspect of the Android platform is Inter-Application Communication (IAC), which enables reuse of functionality across apps and app components via message passing. While a powerful feature, IAC also constitutes a serious attack surface. A malicious app can embed a payload into an IAC message, thereby driving the recipient app into a potentially vulnerable behavior if the message is processed without its fields first being sanitized or validated. We present what is, to our knowledge, the first comprehensive testing algorithm for Android IAC vulnerabilities. Toward this end, we first describe a catalog, stemming from our field experience, of 8 concrete vulnerability types that can potentially arise due to unsafe handling of incoming IAC messages. We then explain the main challenges that automated discovery of Android IAC vulnerabilities entails, including in particular path coverage and custom data fields, and present simple yet surprisingly effective solutions to these challenges. We have realized our testing approach as the IntentDroid system, which is available as a commercial cloud service. IntentDroid utilizes lightweight platform-level instrumentation, implemented via debug breakpoints (to run atop any Android device without any setup or customization), to recover IAC-relevant app-level behaviors. Evaluation of IntentDroid over a set of 80 top-popular apps has revealed a total of 150 IAC vulnerabilities, some already fixed by the developers following our report, with a recall rate of 92% w.r.t. a ground truth established via manual auditing by a security expert.
Citations: 70
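The kind of IAC probing the paper automates can be sketched with a small driver script. The snippet below is not IntentDroid; it merely delivers Intents carrying suspicious string extras to a hypothetical exported component via adb, which is a common way to manually probe unsafe handling of incoming IAC fields:

```python
import subprocess

# Hypothetical target component; in practice exported components are read
# from the app's AndroidManifest.xml.
COMPONENT = "com.example.victim/.DeepLinkActivity"
EXTRA_KEY = "url"

# Representative payloads for vulnerability classes like those catalogued in
# the paper (unvalidated URLs, file access, path traversal). Payloads are kept
# free of shell metacharacters; anything fancier needs extra quoting for the
# device shell that adb invokes.
PAYLOADS = [
    "http://attacker.example/phish",                    # unvalidated URL
    "file:///data/local/tmp/injected.html",             # file:// into a WebView
    "../../../../etc/hosts",                            # path traversal
    "content://com.example.victim.provider/secrets",    # unexpected content URI
]

def send_intent(payload: str) -> None:
    """Deliver one IAC message with a crafted string extra via adb."""
    subprocess.run(
        ["adb", "shell", "am", "start", "-W",
         "-n", COMPONENT, "--es", EXTRA_KEY, payload],
        check=True,
    )

if __name__ == "__main__":
    for p in PAYLOADS:
        print("sending payload:", p)
        send_intent(p)
        # A real harness (like IntentDroid) would now observe the app's
        # behaviour, e.g. via instrumentation or logcat, to decide whether
        # the payload reached a sensitive sink unsanitized.
```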
Feedback-controlled random test generation
Kohsuke Yatoh, Kazunori Sakamoto, F. Ishikawa, S. Honiden
DOI: 10.1145/2771783.2771805 | Published: 2015-07-13
Abstract: Feedback-directed random test generation is a widely used technique to generate random method sequences. It leverages feedback to guide generation. However, the validity of feedback guidance has not been challenged yet. In this paper, we investigate the characteristics of feedback-directed random test generation and propose a method that exploits the obtained knowledge that excessive feedback limits the diversity of tests. First, we show that the feedback loop of the feedback-directed generation algorithm is a positive feedback loop that amplifies any bias emerging in the candidate value pool. This over-directs the generation and limits the diversity of generated tests; thus, limiting the amount of feedback can improve the diversity and effectiveness of generated tests. Second, we propose a method named feedback-controlled random test generation, which aggressively controls the feedback in order to promote diversity of generated tests. Experiments on eight different real-world application libraries indicate that our method increases branch coverage by 78% to 204% over the original feedback-directed algorithm on large-scale utility libraries.
Citations: 12
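A toy generator can make the "limit the feedback" idea concrete. The sketch below is illustrative only and is not the paper's algorithm: it chains random method calls Randoop-style, but caps how many results flow back into the per-type reuse pool, which is one plausible way to keep early values from dominating later generations:

```python
import random
from collections import defaultdict

MAX_POOL_PER_TYPE = 20   # the "feedback control": cap reuse-pool growth

pool = defaultdict(list)          # type name -> previously produced values
pool["int"] = [0, 1, -1]          # seed values
pool["str"] = ["", "a"]

# A toy API under test: (callable, argument types, result type).
API = [
    (lambda a, b: a + b, ("int", "int"), "int"),
    (lambda s, n: s * n, ("str", "int"), "str"),
    (lambda s: len(s),   ("str",),       "int"),
]

def generate_sequence(max_calls=5):
    """Randomly chain API calls, reusing pooled values as arguments."""
    trace = []
    for _ in range(max_calls):
        fn, arg_types, ret_type = random.choice(API)
        try:
            args = [random.choice(pool[t]) for t in arg_types]
            result = fn(*args)
        except Exception:
            break                       # discard sequences that fail here
        trace.append((fn, args, result))
        # Feedback: keep the result for reuse, but only while the pool for
        # this type is below the cap, so no single value floods the pool.
        if len(pool[ret_type]) < MAX_POOL_PER_TYPE:
            pool[ret_type].append(result)
    return trace

random.seed(0)
tests = [generate_sequence() for _ in range(100)]
print("generated", sum(len(t) for t in tests), "calls;",
      {t: len(v) for t, v in pool.items()}, "values pooled")
```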
Automatic generation of system test cases from use case specifications
Chunhui Wang, F. Pastore, Arda Goknil, L. Briand, Muhammad Zohaib Z. Iqbal
DOI: 10.1145/2771783.2771812 | Published: 2015-07-13
Abstract: In safety critical domains, system test cases are often derived from functional requirements in natural language (NL) and traceability between requirements and their corresponding test cases is usually mandatory. The definition of test cases is therefore time-consuming and error prone, especially so given the quickly rising complexity of embedded systems in many critical domains. Though considerable research has been devoted to automatic generation of system test cases from NL requirements, most of the proposed approaches require significant manual intervention or additional, complex behavioral modelling. This significantly hinders their applicability in practice. In this paper, we propose Use Case Modelling for System Tests Generation (UMTG), an approach that automatically generates executable system test cases from use case specifications and a domain model, the latter including a class diagram and constraints. Our rationale and motivation are that, in many environments, including that of our industry partner in the reported case study, both use case specifications and domain modelling are common and accepted practice, whereas behavioural modelling is considered a difficult and expensive exercise if it is to be complete and precise. In order to extract behavioral information from use cases and enable test automation, UMTG employs Natural Language Processing (NLP), a restricted form of use case specifications, and constraint solving.
Citations: 22
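A tiny example shows how a guard condition extracted from a restricted use-case step can be turned into concrete test inputs by constraint solving. The sketch uses the Z3 solver and a made-up condition; it only illustrates the general "NLP plus constraint solving" pipeline and is not UMTG's actual rule set:

```python
# Illustrative only: a guard condition extracted (by NLP) from a restricted
# use-case step such as "The system VALIDATES THAT the temperature is
# between 5 and 40 degrees" is solved to obtain test inputs for both the
# nominal and the error scenario. Requires: pip install z3-solver
from z3 import Int, Solver, And, Not, sat

temperature = Int("temperature")
guard = And(temperature >= 5, temperature <= 40)

def solve(constraint):
    """Return a value of `temperature` satisfying the constraint, or None."""
    s = Solver()
    s.add(constraint)
    return s.model()[temperature] if s.check() == sat else None

# One input that satisfies the guard (exercises the basic flow) and one that
# violates it (exercises the alternative/error flow).
print("nominal input :", solve(guard))
print("error input   :", solve(Not(guard)))
```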
Collaborative testing across shared software components (doctoral symposium)
Teng Long
DOI: 10.1145/2771783.2784774 | Published: 2015-07-13
Abstract: Components of numerous component-based software systems are often developed and tested by multiple stakeholders, and there often exists significant overlap and synergy in the processes of testing systems with shared components. This work demonstrates that testers of shared software components can save effort by avoiding redundant work, and improve the test quality of each component as well as overall software systems by using information obtained when testing across multiple components. My research contains three parts. First, I investigate current testing practices for component-based software systems to find the overlap and synergy we conjecture exists. Second, I design and implement infrastructure and related tools to facilitate communication and data sharing between testers. Third, I design various testing processes to implement different collaborative testing algorithms and apply them to large actively developed software systems.
Citations: 2
Cost-aware combinatorial interaction testing (doctoral symposium)
Gulsen Demiroz
DOI: 10.1145/2771783.2784775 | Published: 2015-07-13
Abstract: The configuration spaces of software systems are often too large to test exhaustively. Combinatorial interaction testing approaches, such as covering arrays, systematically sample the configuration space and test only the selected configurations. Traditional t-way covering arrays aim to cover all t-way combinations of option settings in a minimum number of configurations. By doing so, they assume that the testing cost is the same for all configurations. In my thesis work, however, we argue that, in practice, the actual testing cost may differ from one configuration to another and that accounting for these differences can improve the cost-effectiveness of covering arrays. To this end, we introduced a novel combinatorial object, called a cost-aware covering array, where a t-way cost-aware covering array is a t-way covering array that minimizes a given cost function. As a first step, we developed an algorithm for a simple yet important scenario, and the results of our empirical studies suggest that cost-aware covering arrays can greatly reduce the actual cost of testing compared to traditional covering arrays. We also defined a framework for specifying the cost function, but we observed that manually creating these cost models is impractical. Hence our first future goal is to develop an approach for automatically discovering cost models for complex configuration spaces. Our second future goal is to develop algorithms that generate cost-aware covering arrays for more general cost scenarios; our focus is currently on meta-heuristic search algorithms, such as simulated annealing and genetic algorithms, to construct cost-aware covering arrays. Another goal is to expand the cost framework to be test-case aware, where not every test case is valid for every configuration, and hence the cost of running the test suite differs from one configuration to another.
Citations: 7
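To make the notion concrete, here is a small greedy construction of a cost-aware 2-way covering array for a toy configuration space. This is an illustrative heuristic under an assumed cost model, not the algorithm developed in the thesis: each step picks the configuration with the best ratio of newly covered option pairs to testing cost, so expensive configurations are selected only when they pay for themselves in coverage:

```python
from itertools import combinations, product

# Three options, two settings each (a toy configuration space).
options = {"os": ["linux", "win"], "db": ["mysql", "pg"], "cache": ["on", "off"]}
names = list(options)

def cost(config):
    # Assumed cost model: Windows configurations are slower to test.
    return 3.0 if config["os"] == "win" else 1.0

def pairs(config):
    # All 2-way interactions covered by a single configuration.
    return {((a, config[a]), (b, config[b])) for a, b in combinations(names, 2)}

all_configs = [dict(zip(names, vals)) for vals in product(*options.values())]
uncovered = set().union(*(pairs(c) for c in all_configs))

selected = []
while uncovered:
    # Greedy choice: best ratio of newly covered pairs to testing cost.
    best = max(all_configs, key=lambda c: len(pairs(c) & uncovered) / cost(c))
    selected.append(best)
    uncovered -= pairs(best)

total = sum(cost(c) for c in selected)
print(f"{len(selected)} configurations, total cost {total}:")
for c in selected:
    print(" ", c)
```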
Automated unit test generation during software development: a controlled experiment and think-aloud observations
J. Rojas, G. Fraser, Andrea Arcuri
DOI: 10.1145/2771783.2771801 | Published: 2015-07-13
Abstract: Automated unit test generation tools can produce tests that are superior to manually written ones in terms of code coverage, but are these tests helpful to developers while they are writing code? A developer would first need to know when and how to apply such a tool, and would then need to understand the resulting tests in order to provide test oracles and to diagnose and fix any faults that the tests reveal. Considering all this, does automatically generating unit tests provide any benefit over simply writing unit tests manually? We empirically investigated the effects of using an automated unit test generation tool (EvoSuite) during development. A controlled experiment with 41 students shows that using EvoSuite leads to an average branch coverage increase of +13%, and 36% less time is spent on testing compared to writing unit tests manually. However, there is no clear effect on the quality of the implementations, as it depends on how the test generation tool and the generated tests are used. In-depth analysis, using five think-aloud observations with professional programmers, confirms the necessity to increase the usability of automated unit test generation tools, to integrate them better during software development, and to educate software developers on how to best use those tools.
Citations: 57
Automatic fault injection for driver robustness testing
Kai Cong, Li Lei, Zhenkun Yang, Fei Xie
DOI: 10.1145/2771783.2771811 | Published: 2015-07-13
Abstract: Robustness testing is a crucial stage in the device driver development cycle. To accelerate driver robustness testing, effective fault scenarios need to be generated and injected without requiring much time and human effort. In this paper, we present a practical approach to automatic runtime generation and injection of fault scenarios for driver robustness testing. We identify target functions that can fail from runtime execution traces, generate effective fault scenarios on these target functions using a bounded trace-based iterative strategy, and inject the generated fault scenarios at runtime to test driver robustness using a permutation-based injection mechanism. We have evaluated our approach on 12 Linux device drivers and found 28 severe bugs. All these bugs have been further validated via manual fault injection. The results demonstrate that our approach is useful and efficient in generating fault scenarios for driver robustness testing with little manual effort.
Citations: 20
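The bounded, trace-based idea can be sketched in user space (the paper itself targets Linux kernel drivers; the function names and "driver" below are hypothetical). From a recorded trace of calls to fallible target functions, fault scenarios are the bounded combinations of call sites to fail, and each scenario is replayed with a wrapper forcing exactly those occurrences to fail:

```python
from itertools import combinations

# A recorded trace of target-function call sites (function name, occurrence).
# In the kernel-driver setting these would be allocation/IO helpers such as
# kmalloc; the names here are just placeholders.
trace = [("alloc_buffer", 1), ("alloc_buffer", 2), ("register_irq", 1)]

def fault_scenarios(trace, max_faults=2):
    """All scenarios that fail at most `max_faults` recorded call sites."""
    for k in range(1, max_faults + 1):
        yield from combinations(trace, k)

class FaultInjector:
    """Forces selected occurrences of target functions to fail."""
    def __init__(self, scenario):
        self.scenario = set(scenario)
        self.seen = {}

    def call(self, name, real_fn, *args):
        self.seen[name] = self.seen.get(name, 0) + 1
        if (name, self.seen[name]) in self.scenario:
            return None              # simulate the failure return value
        return real_fn(*args)

# Toy "driver" code path exercised under each scenario.
def init_device(inject):
    buf1 = inject.call("alloc_buffer", lambda: bytearray(64))
    buf2 = inject.call("alloc_buffer", lambda: bytearray(64))
    irq = inject.call("register_irq", lambda: 42)
    if buf1 is None or buf2 is None or irq is None:
        return "handled failure"     # robust driver: clean error path
    return "ok"

for scenario in fault_scenarios(trace):
    outcome = init_device(FaultInjector(scenario))
    print(sorted(scenario), "->", outcome)
```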
BrowserAudit: automated testing of browser security features
Charlie Hothersall-Thomas, S. Maffeis, Chris Novakovic
DOI: 10.1145/2771783.2771789 | Published: 2015-07-13
Abstract: The security of the client side of a web application relies on browser features such as cookies, the same-origin policy and HTTPS. As the client side grows increasingly powerful and sophisticated, browser vendors have stepped up their offering of security mechanisms which can be leveraged to protect it. These are often introduced experimentally and informally and, as adoption increases, gradually become standardised (e.g., CSP, CORS and HSTS). Considering the diverse landscape of browser vendors, releases, and customised versions for mobile and embedded devices, there is a compelling need for a systematic assessment of browser security. We present BrowserAudit, a tool for testing that a deployed browser enforces the guarantees implied by the main standardised and experimental security mechanisms. It includes more than 400 fully-automated tests that exercise a broad range of security features, helping web users, application developers and security researchers to make an informed security assessment of a deployed browser. We validate BrowserAudit by discovering both fresh and known security-related bugs in major browsers.
Citations: 17