2013 8th International Workshop on Automation of Software Test (AST): Latest Publications

An industry proof-of-concept demonstration of automated combinatorial test
R. Bartholomew
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595802
Abstract: Studies have found that the largest single cost and schedule component of safety-critical, embedded system development is software rework: locating and fixing software defects found during test. In many such systems these defects are the result of interactions among no more than 6 variables, suggesting that 6-way combinatorial testing would be sufficient to trigger and detect them. The National Institute of Standards and Technology developed an approach to automatically generating, executing, and analyzing such tests. This paper describes an industry proof-of-concept demonstration of automated unit and integration testing using this approach. The goal was to see if it might cost-effectively reduce rework by reducing the number of software defects escaping into system test, provided it was adequately accurate, scalable, mature, easy to learn, and easy to use while still achieving the required level of structural coverage. Results were positive: e.g., 2775 test input vectors were generated in 6 seconds, expected outputs were generated in 60 minutes, and executing and analyzing them took 8 minutes. Tests detected all seeded defects and in the proof-of-concept demonstration achieved nearly 100% structural coverage.
Citations: 11
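The t-way generation the abstract describes can be illustrated with a small greedy generator (a brute-force sketch suitable only for tiny parameter spaces, not the covering-array algorithm used by NIST's ACTS tool; the parameters in the usage example are invented):

```python
from itertools import combinations, product

def t_way_suite(params, t):
    """Greedily build a test suite covering every t-way value combination.

    params: dict mapping parameter name -> list of possible values.
    Returns a list of full test vectors (dicts). A simple greedy sketch:
    it enumerates all candidate vectors, so it only scales to toy inputs.
    """
    names = list(params)
    # All t-way interactions that must be covered at least once.
    uncovered = set()
    for group in combinations(names, t):
        for values in product(*(params[n] for n in group)):
            uncovered.add(tuple(zip(group, values)))
    suite = []
    while uncovered:
        best, best_hits = None, set()
        # Keep the candidate full vector that covers the most interactions.
        for candidate in product(*(params[n] for n in names)):
            vec = dict(zip(names, candidate))
            hits = {c for c in uncovered
                    if all(vec[n] == v for n, v in c)}
            if len(hits) > len(best_hits):
                best, best_hits = vec, hits
        suite.append(best)
        uncovered -= best_hits
    return suite

# Three boolean parameters need 8 exhaustive tests, but 4 suffice for t=2.
pairwise = t_way_suite({"a": [0, 1], "b": [0, 1], "c": [0, 1]}, t=2)
```

For three boolean parameters the greedy loop finds a 4-test pairwise suite, half the exhaustive 8; the gap widens rapidly with more parameters, which is the economic argument the paper tests in practice.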
Vee@Cloud: The virtual test lab on the cloud
Xiaoying Bai, Muyang Li, Xiaofei Huang, W. Tsai, J. Gao
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595785
Abstract: Large-scale system testing is challenging. It usually requires a large number of test cases, substantial resources, and geographically distributed usage scenarios. It is expensive to build the test environment and to achieve a given level of test confidence. To address these challenges, test systems need to be scalable in a cost-effective manner. TaaS (Testing-as-a-Service) promotes a Cloud-based testing architecture that provides online testing services following a pay-per-use business model. The paper introduces the research and implementation of a TaaS system called Vee@Cloud, a scalable virtual test lab built upon Cloud infrastructure services. The resource manager allocates Virtual Machine instances and deploys test tasks from a pool of available resources across different Clouds. The workload generator simulates various workload patterns, especially for systems with new architectural styles like Web 2.0 and big-data processing. Vee@Cloud promotes continuous monitoring and evaluation of online services: the monitor collects real-time performance data and analyzes it against SLAs (Service Level Agreements). A proof-of-concept prototype system has been built and early experiments have been conducted using public Cloud services.
Citations: 22
Quantifying the complexity of dataflow testing
G. Denaro, M. Pezzè, Mattia Vivanti
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595804
Abstract: It is common belief that dataflow testing criteria are harder to satisfy than statement and branch coverage. As explanations, several researchers point to the difficulty of finding test suites that exercise many dataflow relations and to the increased impact of infeasible program paths on the maximum coverage rates that can actually be obtained. Yet, although some examples are given in research papers, we lack data on the validity of these hypotheses. This paper presents an experiment with a large sample of object-oriented classes and provides solid empirical evidence that dataflow coverage rates are steadily lower than statement and branch coverage rates, and that the uncovered dataflow elements do not generally depend on the feasibility of single statements.
Citations: 6
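Why def-use criteria are harder to satisfy than branch coverage can be seen in a toy example (illustrative code, not taken from the paper):

```python
def scale(flag_a, flag_b, trace=None):
    """Toy function: full branch coverage does not imply def-use coverage."""
    x, x_def = 1, "d1"              # d1: first definition of x
    if flag_a:
        x, x_def = 2, "d2"          # d2: redefinition of x
    if flag_b:
        if trace is not None:
            trace.add(x_def)        # record which definition reaches this use
        return x * 10               # u1: the only use of x
    return 0

# The suite [(True, True), (False, False)] takes both outcomes of each
# condition (100% branch coverage), yet the def-use pair (d1, u1) is never
# exercised: it additionally requires flag_a=False and flag_b=True.
```

Covering (d1, u1) needs a specific *combination* of branch outcomes, not just each outcome in isolation, which is one mechanism behind the lower dataflow coverage rates the paper measures.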
Security testing of the communication among Android applications
Andrea Avancini, M. Ceccato
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595792
Abstract: An important reason behind the popularity of smartphones and tablets is the huge number of applications available for download, which expand the functionality of the devices with brand-new features. Official stores provide a plethora of applications developed by third parties, for entertainment and business, most of them free. However, confidential data (e.g., phone contacts, GPS position, banking data, and emails) could be disclosed by vulnerable applications. Sensitive applications should carefully validate exchanged data to avoid security problems. In this paper, we propose a novel approach to testing communication among applications on mobile devices. We present a test case generation strategy and a testing adequacy criterion for Android applications. Our approach has been assessed on three widely used Android applications.
Citations: 28
Computation and visualization of cause-effect paths
Alpana Dubey, P. Murthy
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595805
Abstract: Static analyzers detect possible run-time errors at compile time, often employing data-flow analysis techniques to infer properties of programs. Usually, dataflow analysis tools report possible errors with line numbers in the source code and leave the task of locating the root causes of those errors to the developer. This paper proposes a technique to aid developers in locating the root causes of statically identified run-time errors with the help of cause-effect paths. A cause-effect path terminates at an erroneous statement and originates at the statement responsible for the error. We propose modifications to the classic data-flow analysis algorithm to compute cause-effect paths, and we discuss different visualization modes in which cause-effect paths can be displayed. As a case study, we implemented a null-pointer analyzer with the additional capability of cause-effect path computation, using the Microsoft Phoenix framework. In addition, we propose a methodology to automatically generate an analyzer that computes cause-effect paths using a framework such as Microsoft Phoenix.
Citations: 0
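The idea of a cause-effect path, tracing a reported error back to its originating statement, can be sketched as forward propagation over a toy statement list (a much-simplified stand-in for the paper's data-flow analysis; the statement encoding is invented):

```python
def cause_effect_paths(stmts):
    """Toy null-pointer analysis that records cause-effect paths.

    Each statement is (line, kind, target, source) with kind one of:
      "assign_null": target = null
      "copy":        target = source
      "deref":       dereference of target (error if it may be null)
    Returns one path (list of line numbers) per possible null dereference,
    running from the null's origin to the faulty statement.
    """
    origin = {}     # var -> path of lines along which its null value flowed
    reports = []
    for line, kind, target, source in stmts:
        if kind == "assign_null":
            origin[target] = [line]             # root cause starts here
        elif kind == "copy":
            if source in origin:
                origin[target] = origin[source] + [line]  # null propagates
            else:
                origin.pop(target, None)        # overwritten with non-null
        elif kind == "deref":
            if target in origin:
                reports.append(origin[target] + [line])   # cause -> effect
    return reports

# p = null; q = p; *q  ->  cause-effect path [1, 2, 3]
program = [
    (1, "assign_null", "p", None),
    (2, "copy", "q", "p"),
    (3, "deref", "q", None),
]
```

A real analyzer works over a control-flow graph with merges and must handle infeasibility, but the reported artifact is the same shape: an ordered chain of statements a developer can walk from cause to effect.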
Automatic generation of parallel unit tests
Jochen Schimmel, Korbinian Molitorisz, A. Jannesari, W. Tichy
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595789
Abstract: Multithreaded software is subject to data races. Currently available data race detectors report such errors to the developer but consume large amounts of time and memory; many approaches are not applicable to large software projects. Unit tests containing fractions of the program lead to better results. We propose AutoRT, an approach to automatically generate parallel unit tests, as targets for data race detectors, from existing programs. AutoRT uses the Single Static Multiple Dynamic (SSMD) analysis pattern to reduce complexity and can therefore be used efficiently even in large software projects. We evaluate AutoRT using Microsoft CHESS and show that with SSMD all 110 data races contained in our sample programs can be located.
Citations: 18
Did we test our changes? Assessing alignment between tests and development in practice
S. Eder, B. Hauptmann, Maximilian Junker, Elmar Jürgens, Rudolf Vaas, Karl-Heinz Prommer
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595800
Abstract: Testing and development are increasingly performed by different organizations, often in different countries and time zones. Since this distance complicates communication, close alignment between development and testing becomes increasingly challenging. Unfortunately, poor alignment between the two threatens to decrease test effectiveness or increase costs. In this paper, we propose a conceptually simple approach to assess test alignment: uncovering methods that were changed but never executed during testing. The paper's contribution is a large industrial case study that analyzes development changes, test service activity, and field faults of an industrial business information system over 14 months. It demonstrates that the approach is suitable for producing meaningful data and supports test alignment in practice.
Citations: 13
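The core of the approach, intersecting change data with coverage data, amounts to a set difference (a minimal sketch; the method identifiers in the example are invented):

```python
def untested_changes(changed_methods, executed_methods):
    """Methods modified during development but never executed by any test.

    changed_methods:  method identifiers extracted from version-control diffs.
    executed_methods: method identifiers observed in test coverage logs.
    Returns the untested changed methods, sorted for stable reporting.
    """
    return sorted(set(changed_methods) - set(executed_methods))

# A change to Order.total that no test ever executed is flagged.
flagged = untested_changes(
    changed_methods={"Order.total", "Cart.add", "User.login"},
    executed_methods={"Cart.add", "User.login", "User.logout"},
)
```

The engineering effort in the case study lies in reliably producing the two input sets over 14 months of history, not in the comparison itself; that simplicity is what makes the approach practical.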
Which compiler optimization options should I use for detecting data races in multithreaded programs?
Changjiang Jia, W. Chan
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595791
Abstract: Different compiler optimization options may produce different versions of object code. To the best of our knowledge, existing studies on concurrency bug detection in the public literature have not reported the effects of different compiler optimization options on detection effectiveness. This paper reports a preliminary exploratory study, the first to investigate this aspect. The study examines happened-before based predictive data race detection scenarios on four benchmarks from the PARSEC 3.0 suite compiled under six different GNU GCC optimization options. We observe from the data set that the same race detection technique may produce different sets of races or different detection probabilities under different optimization scenarios. Based on these observations, we formulate two hypotheses for future investigation.
Citations: 6
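The happened-before race detection the study builds on can be illustrated with a toy vector-clock detector over a recorded trace of lock and memory events (a simplified sketch assuming only acquire/release synchronization; real detectors instrument compiled code, which is exactly where the optimization level studied here comes into play):

```python
from collections import defaultdict

def detect_races(trace, n_threads):
    """Flag unordered conflicting accesses in a recorded event trace.

    trace events: ("acq", tid, lock), ("rel", tid, lock),
                  ("rd", tid, loc),   ("wr", tid, loc).
    Two accesses to the same location race if they come from different
    threads, at least one is a write, and neither happens-before the other.
    """
    clock = [[0] * n_threads for _ in range(n_threads)]
    for t in range(n_threads):
        clock[t][t] = 1
    lock_clock = defaultdict(lambda: [0] * n_threads)
    accesses = defaultdict(list)        # loc -> [(vc, tid, is_write)]
    races = []

    def happens_before(vc1, vc2):
        return all(a <= b for a, b in zip(vc1, vc2))

    for ev in trace:
        kind, tid = ev[0], ev[1]
        if kind == "acq":               # join the releasing thread's clock
            lc = lock_clock[ev[2]]
            clock[tid] = [max(a, b) for a, b in zip(clock[tid], lc)]
        elif kind == "rel":             # publish this thread's clock
            lock_clock[ev[2]] = list(clock[tid])
            clock[tid][tid] += 1
        else:
            is_write = kind == "wr"
            vc = list(clock[tid])
            for prev_vc, prev_tid, prev_w in accesses[ev[2]]:
                if prev_tid != tid and (is_write or prev_w) \
                        and not happens_before(prev_vc, vc):
                    races.append((ev[2], prev_tid, tid))
            accesses[ev[2]].append((vc, tid, is_write))
            clock[tid][tid] += 1
    return races
```

Two unsynchronized writes to the same location are flagged, while the same writes ordered through a common lock are not; the paper's question is how the compiled code that such detectors observe shifts under -O0 through -O3.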
JIFFY: A framework for encompassing aspects in testing and debugging software
M. Asif, Y. R. Reddy
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595806
Abstract: Aspect Oriented Programming (AOP) advocates the notion of aspects to encapsulate crosscutting concerns. A concern is a behavior in a computer program; it is said to be crosscutting if the modules that address it are scattered and tangled with other modules of the system. In this paper, we investigate the possibility of using AOP for software testing and non-invasive debugging. We have identified crosscutting concerns such as access control, logging, performance, and tracing, and developed a framework based on Java bytecode instrumentation to inject these crosscutting concerns into compiled code. The framework is available as a service: it takes the required code as input and produces as output the changed code containing the appropriate aspects. We argue that the framework can thus be the basis for implementing the notion of "Enabling Testing as a Service".
Citations: 1
Utilizing software reuse experience for automated test recommendation
Werner Janjic, C. Atkinson
Pub Date: 2013-05-18 | DOI: 10.1109/IWAST.2013.6595799
Abstract: The development of defect tests is still a very labour-intensive process that demands a high level of domain knowledge, concentration, and problem awareness from software engineers. Any technology that can reduce the manual effort involved in this process therefore has the potential to significantly reduce software development costs and time. One idea for achieving this is to reuse the knowledge bound up in already existing test cases, either directly or indirectly, to assist in the development of tests for new software components and systems. Although general software reuse has received a lot of attention in the past, both in academia and industry, previous research has focussed on the reuse and recommendation of existing software artifacts in the creation of new product code rather than on the recommendation of tests. In this paper we focus on the latter and present a novel automated test recommendation approach that leverages lessons learned from traditional software reuse to proactively make test case suggestions while an engineer is developing tests. In contrast, most existing testing-assistance tools provide ex post assistance to test developers in the form of coverage assessments and test quality evaluations. Our goal is to create an automated, non-intrusive recommendation system for efficient software test development. In this paper we set out the basic strategy by which this can be achieved and present a prototypical implementation of our test recommender system for Eclipse.
Citations: 18