{"title":"Evaluating the Effectiveness of Regression Test Suites for Extract Method Validation","authors":"Levi Gomes, Cassio Cordeiro, Everton L. G. Alves","doi":"10.1145/3559744.3559745","DOIUrl":null,"url":null,"abstract":"Refactoring edits aim to improve structural aspects of a system without changing its external behavior. However, while trying to perform a safe edit, a developer might introduce refactoring faults. To avoid refactoring faults, developers often use test suites to validate refactoring edits. However, depending on the quality of a test suite, its verdict may be misleading. In this work, we first present an empirical study that investigates the effectiveness of test suites (manually created and generated) for validating Extract Method refactoring faults. We found that manual suites detected 61,9% the injected faults, while generated suites detected only 46,7% (Randoop) and 55,8% (Evosuite). Then, we propose a new approach for evaluating the quality of a test suite for detecting refactoring faults. This approach is implemented by our prototype tool that focuses on two types of Extract Method faults. We demonstrate its applicability in a second empirical study that measured the quality of test suites from three different open-source projects.","PeriodicalId":187140,"journal":{"name":"Proceedings of the 7th Brazilian Symposium on Systematic and Automated Software Testing","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th Brazilian Symposium on Systematic and Automated Software Testing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3559744.3559745","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Refactoring edits aim to improve structural aspects of a system without changing its external behavior. However, while attempting a safe edit, a developer may introduce refactoring faults. To avoid them, developers often rely on test suites to validate refactoring edits; yet, depending on the quality of a suite, its verdict may be misleading. In this work, we first present an empirical study that investigates the effectiveness of test suites (manually created and automatically generated) for detecting Extract Method refactoring faults. We found that manual suites detected 61.9% of the injected faults, while generated suites detected only 46.7% (Randoop) and 55.8% (EvoSuite). We then propose a new approach for evaluating the quality of a test suite with respect to detecting refactoring faults. This approach is implemented in a prototype tool that focuses on two types of Extract Method faults. We demonstrate its applicability in a second empirical study that measured the quality of test suites from three open-source projects.
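To make the fault class concrete, below is a minimal, hypothetical sketch of an Extract Method edit that goes wrong, together with a test that would catch it. It is not taken from the paper's subjects; all class and method names (OrderBefore, OrderAfter, subtotal) are illustrative, and the test assumes JUnit 5 on the classpath.

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Before the edit: the discount logic lives inline in total().
class OrderBefore {
    double total(double[] prices, boolean vip) {
        double sum = 0;
        for (double p : prices) sum += p;
        if (vip && sum > 100) sum *= 0.9; // 10% discount on large VIP orders
        return sum;
    }
}

// After an Extract Method edit: the summation loop was pulled out
// into subtotal(), but the developer also moved the discount and
// dropped the "sum > 100" guard -- a refactoring fault that
// silently changes external behavior.
class OrderAfter {
    double total(double[] prices, boolean vip) {
        return subtotal(prices, vip);
    }

    private double subtotal(double[] prices, boolean vip) {
        double sum = 0;
        for (double p : prices) sum += p;
        if (vip) sum *= 0.9; // fault: missing "&& sum > 100"
        return sum;
    }
}

class OrderTest {
    // A suite that exercises the boundary detects the fault:
    // a small VIP order must not be discounted.
    @Test
    void smallVipOrderIsNotDiscounted() {
        assertEquals(50.0, new OrderAfter().total(new double[]{50.0}, true), 1e-9);
    }
}
```

A suite lacking such boundary inputs (e.g., one that only tests non-VIP orders) would pass on both versions and wrongly report the edit as safe, which is precisely why the paper measures suite quality against injected refactoring faults rather than trusting a green verdict.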