Title: Analyzing Automatic Test Generation Tools for Refactoring Validation
Authors: I. C. S. Silva, Everton L. G. Alves, W. Andrade
Published in: 2017 IEEE/ACM 12th International Workshop on Automation of Software Testing (AST)
Publication date: 2017-05-20
DOI: 10.1109/AST.2017.9
Citations: 10
Abstract
Refactoring edits are very common during agile development. Due to their inherent complexity, refactorings are known to be error-prone. Thus, refactoring edits require validation to ensure that no behavior change was introduced. A common way to validate refactorings is to use automatically generated regression test suites. However, although such tools are popular, it is not certain whether test-generation tools (e.g., Randoop and EvoSuite) are in fact suitable in this context. This paper presents an exploratory study that investigated the effectiveness of suites generated by automatic tools at detecting refactoring faults. Our results show that both Randoop and EvoSuite suites missed more than 50% of all injected faults. Moreover, their suites include a great number of tests that could not be run in their entirety after the edits (obsolete test cases).