{"title":"科学软件中手动和自动测试用例生成技术质量的比较评估——以材料科学工作流的Python项目为例","authors":"Daniel Trübenbach, Sebastian Müller, L. Grunske","doi":"10.1145/3526072.3527523","DOIUrl":null,"url":null,"abstract":"Writing software tests is essential to ensure a high quality of the software project under test. However, writing tests manually is time consuming and expensive. Especially in research fields of the natural sciences, scientists do not have a formal education in software engineering. Thus, automatic test case generation is particularly promising to help build good test suites. In this case study, we investigate the efficacy of automated test case generation approaches for the Python project Atomic Simulation Environment (ASE) used in the material sciences. We compare the branch and mutation coverages reached by both the automatic approaches, as well as a manually created test suite. Finally, we statistically evaluate the measured coverages by each approach against those reached by any of the other approaches. We find that while all evaluated approaches are able to improve upon the original test suite of ASE, none of the automated test case generation algorithms manage to come close to the coverages reached by the manually created test suite. We hypothesize this may be due to the fact that none of the employed test case generation approaches were developed to work on complex structured inputs. 
Thus, we conclude that more work may be needed if automated test case generation is used on software that requires this type of input.","PeriodicalId":206275,"journal":{"name":"2022 IEEE/ACM 15th International Workshop on Search-Based Software Testing (SBST)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A Comparative Evaluation on the Quality of Manual and Automatic Test Case Generation Techniques for Scientific Software - a Case Study of a Python Project for Material Science Workflows\",\"authors\":\"Daniel Trübenbach, Sebastian Müller, L. Grunske\",\"doi\":\"10.1145/3526072.3527523\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Writing software tests is essential to ensure a high quality of the software project under test. However, writing tests manually is time consuming and expensive. Especially in research fields of the natural sciences, scientists do not have a formal education in software engineering. Thus, automatic test case generation is particularly promising to help build good test suites. In this case study, we investigate the efficacy of automated test case generation approaches for the Python project Atomic Simulation Environment (ASE) used in the material sciences. We compare the branch and mutation coverages reached by both the automatic approaches, as well as a manually created test suite. Finally, we statistically evaluate the measured coverages by each approach against those reached by any of the other approaches. We find that while all evaluated approaches are able to improve upon the original test suite of ASE, none of the automated test case generation algorithms manage to come close to the coverages reached by the manually created test suite. 
We hypothesize this may be due to the fact that none of the employed test case generation approaches were developed to work on complex structured inputs. Thus, we conclude that more work may be needed if automated test case generation is used on software that requires this type of input.\",\"PeriodicalId\":206275,\"journal\":{\"name\":\"2022 IEEE/ACM 15th International Workshop on Search-Based Software Testing (SBST)\",\"volume\":\"42 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-05-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE/ACM 15th International Workshop on Search-Based Software Testing (SBST)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3526072.3527523\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE/ACM 15th International Workshop on Search-Based Software Testing (SBST)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3526072.3527523","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Comparative Evaluation on the Quality of Manual and Automatic Test Case Generation Techniques for Scientific Software - a Case Study of a Python Project for Material Science Workflows
Writing software tests is essential to ensure high quality in the software under test. However, writing tests manually is time-consuming and expensive, and in the natural sciences in particular, researchers often lack formal training in software engineering. Automatic test case generation is therefore especially promising for helping to build good test suites. In this case study, we investigate the efficacy of automated test case generation approaches on the Atomic Simulation Environment (ASE), a Python project used in materials science. We compare the branch and mutation coverage achieved by the automatic approaches with that of a manually created test suite, and statistically evaluate the coverage measured for each approach against that reached by every other approach. We find that while all evaluated approaches improve upon ASE's original test suite, none of the automated test case generation algorithms comes close to the coverage reached by the manually created test suite. We hypothesize that this may be because none of the employed test case generation approaches was developed to work on complex structured inputs. Thus, we conclude that more work may be needed before automated test case generation can be applied to software that requires this type of input.
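As background for the two metrics the study compares, the following minimal sketch illustrates why mutation coverage is a stricter measure than branch coverage. The `discount` function and its mutant are hypothetical examples for illustration only; they are not code from ASE or from the paper's tooling.

```python
# Illustrative sketch: branch coverage vs. mutation coverage.
# (Hypothetical example; not from ASE or the paper's evaluation.)

def discount(total):
    """Original function: 10% off for orders of 100 or more."""
    if total >= 100:
        return total * 0.9
    return total

def discount_mutant(total):
    """A mutant: the relational operator `>=` is changed to `>`."""
    if total > 100:  # mutated comparison
        return total * 0.9
    return total

# These two inputs exercise both branches of `discount`, so a test
# suite using them reaches 100% branch coverage...
branch_covering_inputs = [150, 50]
survives = all(discount(t) == discount_mutant(t) for t in branch_covering_inputs)
print(survives)  # True: the mutant survives despite full branch coverage

# ...yet only a test at the boundary (total == 100) distinguishes the
# two versions and thereby "kills" the mutant, which is exactly the
# kind of fault-detection ability that mutation coverage rewards.
print(discount(100), discount_mutant(100))      # 90.0 100
print(discount(100) != discount_mutant(100))    # True: mutant killed
```

A test suite can thus reach full branch coverage while still missing boundary faults, which is one reason the study reports both metrics rather than branch coverage alone.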