The complementary aspect of automatically and manually generated test case sets

A. Vincenzi, T. Bachiega, Daniel G. de Oliveira, S. Souza, J. Maldonado

Proceedings of the 7th International Workshop on Automating Test Case Design, Selection, and Evaluation, 2016-11-18. DOI: 10.1145/2994291.2994295
Testing is a mandatory activity for software quality assurance. Knowledge about the software under test is necessary to generate high-quality test cases, but executing more than 80% of its source code is not an easy task and demands in-depth knowledge of the business rules it implements. In this article, we investigate the adequacy, effectiveness, and cost of manually generated versus automatically generated test sets for Java programs. We observed that, in general, manual test sets achieve higher statement coverage and mutation score than automatically generated ones. One interesting finding, however, is that the automatically generated test sets are complementary to the manual ones: when we combined manual and automated test sets, the resulting sets exceeded the statement coverage and mutation score of the manual sets by more than 10% on average, while keeping cost reasonable. We therefore advocate concentrating manually generated test sets on testing the essential and critical parts of the software.
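To make the notion of complementarity concrete, here is a minimal JUnit 4 sketch, not taken from the paper: the Discount class, its business rule, and both tests are hypothetical. It contrasts a manually written test that encodes a domain rule with the kind of boundary-probing test an automated generator (e.g., Randoop or EvoSuite) typically produces; together the two tests cover branches that either one alone might miss.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test: applies a 10% discount to orders over 100.
class Discount {
    static double apply(double total) {
        if (total < 0) throw new IllegalArgumentException("negative total");
        return total > 100.0 ? total * 0.9 : total;
    }
}

public class DiscountTest {
    // Manual test: encodes the business rule the tester knows about,
    // exercising the discount branch a generator has no oracle for.
    @Test
    public void ordersOver100GetTenPercentOff() {
        assertEquals(180.0, Discount.apply(200.0), 1e-9);
    }

    // Generator-style test: probes an invalid input at the boundary,
    // covering the exception branch a manual test set may skip.
    @Test(expected = IllegalArgumentException.class)
    public void negativeTotalIsRejected() {
        Discount.apply(-1.0);
    }
}
```

Running both tests together raises statement coverage over either test alone, which is the complementarity effect the study measures on real Java programs.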