Does Refactoring Break Tests and to What Extent?
Yutaro Kashiwa, Kazuki Shimizu, Bin Lin, G. Bavota, Michele Lanza, Yasutaka Kamei, Naoyasu Ubayashi
2021 IEEE International Conference on Software Maintenance and Evolution (ICSME), September 2021. DOI: 10.26226/morressier.613b5419842293c031b5b63f
Abstract: Refactoring aims to improve the quality of a software system while preserving its external behavior. In practice, it takes the form of many specific and diverse refactoring operations, which differ in scope and thus in their potential impact on both production and test code. We present a large-scale quantitative study, complemented by a qualitative analysis, involving 615,196 test cases to understand how and to what extent different refactoring operations impact a system's test suites. Our findings show that while the vast majority of refactoring operations rarely or never induce test breaks, some specific refactoring types (e.g., "RENAME Attribute" and "RENAME Class") have a higher chance of breaking test suites. Meanwhile, "ADD Parameter" and "CHANGE Return Type" operations often require additional changed lines to fix the test suites they break. Although some modern IDEs can apply these two types of refactoring automatically, they cannot always avoid test breaks, thus demanding extra human effort.
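The "ADD Parameter" case can be made concrete with a small, hypothetical Java/JUnit sketch (the class, method, and test names below are invented for illustration and are not taken from the paper's dataset): adding a parameter to a production method invalidates every test call site written against the old signature, so each such call must be edited before the tests compile and run again.

// Minimal, hypothetical illustration (not from the study) of how an
// "ADD Parameter" refactoring breaks a test and forces an extra test-side fix.
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Production code before the refactoring:
//   double totalPrice(double unitPrice, int quantity)
// After the refactoring, a discount parameter has been added:
class PriceCalculator {
    double totalPrice(double unitPrice, int quantity, double discount) {
        return unitPrice * quantity * (1.0 - discount);
    }
}

class PriceCalculatorTest {
    @Test
    void totalPriceOfThreeItems() {
        PriceCalculator calc = new PriceCalculator();
        // The old call, calc.totalPrice(10.0, 3), no longer compiles after
        // the refactoring; the test is fixed by passing the new argument:
        assertEquals(30.0, calc.totalPrice(10.0, 3, 0.0), 1e-9);
    }
}

As the abstract notes, even when an IDE applies such a signature-changing refactoring automatically, it cannot always repair the affected tests, for instance when the new parameter requires a meaningful value rather than a default, which is where the extra manual effort arises.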