2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST): Latest Publications

d(mu)Reg: A Path-Aware Mutation Analysis Guided Approach to Regression Testing
Chang-ai Sun, Cuiyang Fan, Zhen Wang, Huai Liu
DOI: 10.1109/AST.2017.8
Abstract: Regression testing re-runs previously executed test cases to check whether previously fixed faults have re-emerged and to ensure that changes do not negatively affect the existing behaviors of the software under development. Today's software is developed and evolved rapidly, so it is critical to perform regression testing quickly and effectively. In this paper, we propose a novel regression testing technique based on a family of mutant selection strategies. Preliminary results show that the proposed technique can significantly improve the efficiency of different regression testing activities, including test case reduction and prioritization. Our work also makes it possible to develop a unified framework that effectively implements various regression testing activities.
Citations: 4
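
The abstract does not detail the mutant selection strategies themselves. As a minimal sketch of the general idea behind mutation-analysis-guided prioritization (not the paper's d(mu)Reg algorithm), the following Java snippet greedily orders test cases by how many not-yet-killed mutants each one kills, assuming a precomputed kill matrix; the class and inputs are hypothetical.

```java
import java.util.*;

// Minimal sketch of mutation-guided test prioritization (not the paper's exact
// algorithm): greedily order tests by how many not-yet-killed mutants they kill,
// given a kill matrix from a prior mutation-analysis run.
public class MutationGuidedPrioritizer {

    // kills.get(test) = set of mutant IDs that the test kills (hypothetical input).
    public static List<String> prioritize(Map<String, Set<Integer>> kills) {
        Set<Integer> remaining = new HashSet<>();
        kills.values().forEach(remaining::addAll);

        List<String> order = new ArrayList<>();
        Set<String> unused = new HashSet<>(kills.keySet());

        while (!unused.isEmpty()) {
            String best = null;
            int bestGain = -1;
            for (String test : unused) {
                Set<Integer> gain = new HashSet<>(kills.get(test));
                gain.retainAll(remaining);          // mutants this test would newly kill
                if (gain.size() > bestGain) {
                    bestGain = gain.size();
                    best = test;
                }
            }
            order.add(best);
            unused.remove(best);
            remaining.removeAll(kills.get(best));
        }
        return order;
    }

    public static void main(String[] args) {
        Map<String, Set<Integer>> kills = new HashMap<>();
        kills.put("testA", new HashSet<>(Arrays.asList(1, 2, 3)));
        kills.put("testB", new HashSet<>(Arrays.asList(3, 4)));
        kills.put("testC", new HashSet<>(Arrays.asList(4)));
        System.out.println(prioritize(kills));      // e.g., [testA, testB, testC]
    }
}
```

Under the same assumption, a reduction step would simply stop adding tests once every killable mutant is covered.
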
Toward Adaptive, Self-Aware Test Automation
Benedikt Eberhardinger, Axel Habermaier, W. Reif
DOI: 10.1109/AST.2017.1
Abstract: Software testing plays a major role in engineering future systems that are becoming ever more ubiquitous and more critical to everyday life. To meet this high demand, test automation is a keystone. However, test automation as used today relies on scripting and capture-and-replay and cannot keep up with autonomous and intelligent systems. We therefore call for adaptive and autonomous test automation and propose a model-based approach that enables self-awareness as well as awareness of the system under test, which is used to automate the test suites.
Citations: 9
Transferring Software Testing Tools to Practice
Tao Xie
DOI: 10.1109/AST.2017.10
Abstract: Achieving successful technology adoption in practice has often been an important goal for both academic and industrial researchers. However, it is generally challenging to transfer research results into industrial products or into tools that are widely adopted. What are the key factors that lead to practical impact for a research project? This talk presents experiences and lessons learned in successfully transferring tools from two testing projects carried out as collaborations between academia and industry. In the Pex project (research.microsoft.com/pex) [3], nearly a decade of collaboration between Microsoft Research and academia has led to high-impact tools that are now shipped by Microsoft and adopted by the community. These tools include Fakes [2], a test isolation framework shipped with Visual Studio 2012/2013; IntelliTest, an automatic test generation tool shipped with Visual Studio 2015; and Code Hunt (www.codehunt.com) [1] (evolved from Pex4Fun [4]), a popular serious-gaming platform for coding contests and practicing programming skills, which attracted more than 350,000 players from May 2014 to August 2016 and has been adopted in the large-scale Microsoft Imagine Cup and Beauty of Programming contests. In the WeChat testing project, recent collaboration [5], [6] between Tencent and academia has produced effective techniques for testing Android apps by improving Google's Monkey, an Android testing tool widely used in industry. The developed techniques have been applied to test WeChat, one of the world's most popular messenger apps with over 800 million monthly active users.
Citations: 2
Analyzing Automatic Test Generation Tools for Refactoring Validation
I. C. S. Silva, Everton L. G. Alves, W. Andrade
DOI: 10.1109/AST.2017.9
Abstract: Refactoring edits are very common during agile development. Due to their inherent complexity, refactorings are known to be error-prone, so they require validation to check that no behavior change was introduced. One valid way of validating refactorings is to use automatically generated regression test suites. However, although popular, it is not certain whether test generation tools (e.g., Randoop and EvoSuite) are in fact suitable in this context. This paper presents an exploratory study that investigated the efficiency of suites generated by automatic tools with respect to their capacity to detect refactoring faults. Our results show that both Randoop and EvoSuite suites missed more than 50% of all injected faults. Moreover, their suites include a great number of tests that could not be run in their entirety after the edits (obsolete test cases).
Citations: 10
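
For illustration only (this is not output of Randoop or EvoSuite), the sketch below shows the kind of regression assertion such suites rely on: expected values captured against the pre-refactoring version, re-checked on the refactored one. The PriceCalculator class is hypothetical and inlined so the example is self-contained; JUnit 4 is assumed on the classpath.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Illustrative sketch: a regression assertion whose expected value is recorded
// from the program version *before* a refactoring. If a faulty refactoring changes
// observable behavior, the assertion fails when the suite is re-run afterwards.
public class PriceCalculatorRegressionTest {

    // Hypothetical class under test, inlined to keep the example self-contained.
    static class PriceCalculator {
        private final double taxRate;
        PriceCalculator(double taxRate) { this.taxRate = taxRate; }
        double total(double price, double discount) {
            return (price * (1.0 - discount)) * (1.0 + taxRate);
        }
    }

    @Test
    public void totalAppliesDiscountBeforeTax() {
        PriceCalculator calc = new PriceCalculator(0.10);
        // Pre-refactoring behavior: 100.00 minus 10% discount, then 10% tax = 99.00
        assertEquals(99.00, calc.total(100.00, 0.10), 1e-6);
    }
}
```

The obsolete test cases mentioned in the abstract are the complementary problem: generated tests that can no longer be run in their entirety after the edit, so they give no verdict at all.
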
High-Coverage Testing of Navigation Models in Android Applications
Fernando Paulovsky, Esteban Pavese, D. Garbervetsky
DOI: 10.1109/AST.2017.6
Abstract: In this work, we present a tool that systematically discovers and tests the user-observable states of an Android application. We define an appropriate notion of test coverage, and we show the tool's potential by applying it to several publicly available applications.
Citations: 3
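
The abstract does not state which coverage notion the tool uses; as a loose illustration of one plausible definition, the sketch below treats the navigation model as a set of discovered state transitions and reports the fraction a test run has exercised. All names are hypothetical.

```java
import java.util.*;

// Minimal sketch (not the paper's definition): model the app's navigation as a
// graph of user-observable states and report the fraction of discovered state
// transitions that tests have exercised.
public class NavigationCoverage {

    private final Set<String> discoveredTransitions = new HashSet<>();
    private final Set<String> exercisedTransitions = new HashSet<>();

    public void discovered(String fromState, String action, String toState) {
        discoveredTransitions.add(fromState + " --" + action + "--> " + toState);
    }

    public void exercised(String fromState, String action, String toState) {
        String t = fromState + " --" + action + "--> " + toState;
        discoveredTransitions.add(t);   // an exercised transition is also discovered
        exercisedTransitions.add(t);
    }

    public double transitionCoverage() {
        return discoveredTransitions.isEmpty()
                ? 1.0
                : (double) exercisedTransitions.size() / discoveredTransitions.size();
    }

    public static void main(String[] args) {
        NavigationCoverage cov = new NavigationCoverage();
        cov.discovered("Login", "tap(signUp)", "SignUp");
        cov.exercised("Login", "tap(login)", "Home");
        cov.exercised("Home", "tap(settings)", "Settings");
        System.out.printf("Transition coverage: %.2f%n", cov.transitionCoverage()); // 0.67
    }
}
```
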
Tree Preprocessing and Test Outcome Caching for Efficient Hierarchical Delta Debugging
Renáta Hodován, Ákos Kiss, T. Gyimóthy
DOI: 10.1109/AST.2017.4
Abstract: Test case reduction has been automated since the introduction of the minimizing Delta Debugging algorithm, but improving the efficiency of reduction is still a focus of research. This paper focuses on Hierarchical Delta Debugging, itself an improvement over the original technique, and describes how its input tree and caching approach can be changed for higher efficiency. The proposed optimizations were evaluated on artificial and real test cases of 6 different input formats, and achieved an average 45% drop in the number of testing steps needed to reach the minimized results, with the best improvement being as high as 82%, a more than 5-fold speedup.
Citations: 16
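
The paper's cache is specific to its tree representation; the sketch below only illustrates the general test outcome caching idea: memoizing the interesting/uninteresting verdict per candidate configuration so a reducer such as (Hierarchical) Delta Debugging never re-runs the test for a configuration it has already tried. The oracle and inputs are hypothetical.

```java
import java.util.*;
import java.util.function.Predicate;

// Minimal sketch of test outcome caching around an expensive test oracle.
public class CachingOracle<T> {

    private final Predicate<List<T>> test;                 // the expensive test run
    private final Map<List<T>, Boolean> cache = new HashMap<>();
    private int testRuns = 0;

    public CachingOracle(Predicate<List<T>> test) {
        this.test = test;
    }

    public boolean isInteresting(List<T> candidate) {
        // Copy the candidate so the cache key cannot be mutated by the caller.
        return cache.computeIfAbsent(new ArrayList<>(candidate), c -> {
            testRuns++;
            return test.test(c);
        });
    }

    public int testRuns() {
        return testRuns;
    }

    public static void main(String[] args) {
        // Hypothetical failure condition: the input still "fails" if both tokens remain.
        CachingOracle<String> oracle =
                new CachingOracle<>(c -> c.contains("<foo>") && c.contains("</foo>"));

        List<String> candidate = Arrays.asList("<foo>", "x", "</foo>");
        oracle.isInteresting(candidate);
        oracle.isInteresting(candidate);               // served from the cache
        System.out.println(oracle.testRuns());         // 1
    }
}
```
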
Gamification of Software Testing
G. Fraser
DOI: 10.1109/AST.2017.20
Abstract: Writing good software tests is difficult, not every software developer's favorite occupation, and not a prominent aspect of programming education. However, human involvement in testing is unavoidable: what makes a test good is often down to intuition; what makes a test useful depends on an understanding of the program context; what makes a test find bugs depends on understanding the intended program behaviour. Because the consequences of insufficient testing can be dire, this paper explores a new angle on the testing problem: gamification, the approach of converting potentially tedious or boring tasks into components of entertaining gameplay, where the competitive nature of humans motivates them to compete and excel. Applying gamification concepts to software testing has the potential to change it fundamentally in several ways. First, gamification can help to overcome deficiencies in education, where testing is a highly neglected topic. Second, gamification engages practitioners in testing tasks they would otherwise neglect, and gets them to use advanced testing tools and techniques they would otherwise not consider. Finally, gamification makes it possible to crowdsource complex testing tasks through games with a purpose. Collectively, these applications of gamification have the potential to substantially improve software testing practice, and thus software quality.
Citations: 33
Generating Unit Tests with Structured System Interactions
Nikolas Havrikov, Alessio Gambi, A. Zeller, Andrea Arcuri, Juan P. Galeotti
DOI: 10.1109/AST.2017.2
Abstract: There is a large body of literature on automatic unit test generation, and many successful results have been reported. However, current approaches target library classes rather than full applications. A major obstacle for testing full applications is that they interact with the environment; for example, they access files on the hard drive or establish connections to remote servers. Thoroughly testing such applications requires tests that completely control the interactions between the application and its environment. Recent techniques based on mocking enable the generation of tests that include environment interactions, but generating the right type of interactions is still an open problem. In this paper, we describe a novel approach that addresses this problem by enhancing search-based testing with complex test data generation. Experiments on an artificial system show that the proposed approach can generate effective unit tests. Compared with current mocking-based techniques, we generate more robust unit tests that achieve higher coverage and are, arguably, easier to read and understand.
Citations: 4
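
As a hand-written illustration of the underlying goal (the paper's search-based generation technique is not reproduced here), the test below synthesizes a structured input file and thereby fully controls the file-system interaction of a hypothetical CSV-reading routine; JUnit 4 is assumed on the classpath.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import java.util.List;

// Illustrative sketch: instead of mocking the file system with arbitrary bytes,
// the test creates a well-formed, structured input file that the code under test
// reads, so the environment interaction is completely under the test's control.
public class CsvImporterEnvironmentTest {

    // Hypothetical code under test: counts the data rows of a CSV file.
    static int countDataRows(Path csv) throws Exception {
        List<String> lines = Files.readAllLines(csv);
        return Math.max(0, lines.size() - 1);   // minus the header row
    }

    @Test
    public void importerHandlesWellFormedCsv() throws Exception {
        Path csv = Files.createTempFile("orders", ".csv");
        Files.write(csv, Arrays.asList(
                "id,customer,total",            // header
                "1,alice,9.99",
                "2,bob,12.50"));
        try {
            assertEquals(2, countDataRows(csv));
        } finally {
            Files.delete(csv);
        }
    }
}
```
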
Efficient Product-Line Testing Using Cluster-Based Product Prioritization
Mustafa Al-Hajjaji, J. Krüger, Sandro Schulze, Thomas Leich, G. Saake
DOI: 10.1109/AST.2017.7
Abstract: A software product line comprises a set of products that share a common set of features. These features can be reused to customize a product to the specific needs of certain customers or markets. As the number of possible products grows exponentially with the number of features, testing all products is infeasible. Existing testing approaches reduce their effort by restricting the number of products (sampling) and improve their effectiveness by considering the order of tests (prioritization). In this paper, we propose a cluster-based prioritization technique to group similar products with respect to their feature selections. We evaluate our approach using feature models of different sizes and show that cluster-based prioritization can enhance the effectiveness of product-line testing.
Citations: 8
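
The abstract does not specify the clustering algorithm; the following is a minimal sketch of the general scheme, assuming products are represented as boolean feature-selection vectors: group them by Hamming distance using farthest-first seeds, then order them round-robin across clusters so dissimilar products are tested early. All names are hypothetical, and k is assumed to be at most the number of products.

```java
import java.util.*;

// Minimal sketch of cluster-based product prioritization (not the paper's exact
// algorithm): farthest-first seeding, nearest-seed assignment, round-robin ordering.
public class ClusterBasedPrioritizer {

    static int hamming(boolean[] a, boolean[] b) {
        int d = 0;
        for (int i = 0; i < a.length; i++) if (a[i] != b[i]) d++;
        return d;
    }

    public static List<boolean[]> prioritize(List<boolean[]> products, int k) {
        // 1. Pick k seed products farthest-first.
        List<boolean[]> seeds = new ArrayList<>();
        seeds.add(products.get(0));
        while (seeds.size() < k) {
            boolean[] farthest = null;
            int bestDist = -1;
            for (boolean[] p : products) {
                int d = seeds.stream().mapToInt(s -> hamming(s, p)).min().orElse(0);
                if (d > bestDist) { bestDist = d; farthest = p; }
            }
            seeds.add(farthest);
        }

        // 2. Assign every product to its nearest seed.
        List<List<boolean[]>> clusters = new ArrayList<>();
        for (int i = 0; i < k; i++) clusters.add(new ArrayList<>());
        for (boolean[] p : products) {
            int best = 0;
            for (int i = 1; i < k; i++)
                if (hamming(seeds.get(i), p) < hamming(seeds.get(best), p)) best = i;
            clusters.get(best).add(p);
        }

        // 3. Interleave clusters: one product per cluster per round.
        List<boolean[]> order = new ArrayList<>();
        for (int round = 0; order.size() < products.size(); round++)
            for (List<boolean[]> c : clusters)
                if (round < c.size()) order.add(c.get(round));
        return order;
    }

    public static void main(String[] args) {
        List<boolean[]> products = Arrays.asList(
                new boolean[]{true, true, false},
                new boolean[]{true, true, true},
                new boolean[]{false, false, true});
        prioritize(products, 2).forEach(p -> System.out.println(Arrays.toString(p)));
    }
}
```
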
Supporting Agile Teams with a Test Analytics Platform: A Case Study
O. Liechti, J. Pasquier-Rocha, R. Reis
DOI: 10.1109/AST.2017.3
Abstract: Continuous improvement, feedback mechanisms, and automated testing are cornerstones of agile methods. We introduce the concept of test analytics, which brings these three practices together. We illustrate the concept with an industrial case study, describing the experiments run by a team that had set itself the goal of getting better at testing. Beyond technical aspects, we explain how these experiments changed the mindset and behaviour of the team members. We then present an open-source test analytics platform, later developed to share the positive lessons with the community. We describe the platform's features and architecture and explain how it can easily be put to use. Before concluding, we explain how test analytics fits into the broader context of software analytics and present our ideas for future work.
Citations: 6