{"title":"Learning to Predict Software Testability","authors":"Morteza Zakeri Nasrabadi, S. Parsa","doi":"10.1109/CSICC52343.2021.9420548","DOIUrl":null,"url":null,"abstract":"Software testability is the propensity of code to reveal its existing faults, particularly during automated testing. Testing success depends on the testability of the program under test. On the other hand, testing success relies on the coverage of the test data provided by a given test data generation algorithm. However, little empirical evidence has been shown to clarify whether and how software testability affects test coverage. In this article, we propose a method to shed light on this subject. Our proposed framework uses the coverage of Software Under Test (SUT), provided by different automatically generated test suites, to build machine learning models, determining the testability of programs based on many source code metrics. The resultant models can predict the code coverage provided by a given test data generation algorithm before running the algorithm, reducing the cost of additional testing. The predicted coverage is used as a concrete proxy to quantify source code testability. Experiments show an acceptable accuracy of 81.94% in measuring and predicting software testability.","PeriodicalId":374593,"journal":{"name":"2021 26th International Computer Conference, Computer Society of Iran (CSICC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 26th International Computer Conference, Computer Society of Iran (CSICC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CSICC52343.2021.9420548","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4
Abstract
Software testability is the propensity of code to reveal its existing faults, particularly during automated testing. Testing success depends on the testability of the program under test. At the same time, testing success relies on the coverage achieved by the test data that a given test data generation algorithm produces. However, little empirical evidence exists to clarify whether and how software testability affects test coverage. In this article, we propose a method to shed light on this subject. Our framework uses the coverage of the Software Under Test (SUT), obtained from different automatically generated test suites, to build machine learning models that determine the testability of programs from a large set of source code metrics. The resulting models can predict the code coverage that a given test data generation algorithm would provide before the algorithm is run, reducing the cost of additional testing. The predicted coverage serves as a concrete proxy for quantifying source code testability. Experiments show an acceptable accuracy of 81.94% in measuring and predicting software testability.
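To make the framework's core idea concrete, the sketch below shows one plausible realization: a regression model mapping static source code metrics to the coverage achieved by a test generator, with the predicted coverage used as a testability score. This is a minimal illustration under stated assumptions, not the authors' implementation; the metric names, the synthetic data, and the choice of a random forest are all assumptions.

```python
# Minimal sketch: learn coverage (a testability proxy) from code metrics.
# Assumptions: features stand in for [LOC, cyclomatic complexity, coupling];
# the synthetic target mimics statement coverage in [0, 1]. The paper's
# actual metric set and model family may differ.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
# One row per program under test; three hypothetical static metrics.
X = rng.random((500, 3))
# Synthetic "ground truth" coverage reached by an automated test generator.
y = (1.0 - 0.5 * X[:, 1] - 0.2 * X[:, 2]
     + 0.05 * rng.standard_normal(500)).clip(0.0, 1.0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predicted coverage for unseen programs acts as their testability score,
# available before any test generation is actually run.
pred = model.predict(X_test)
print(f"R^2 on held-out programs: {r2_score(y_test, pred):.3f}")
```

In this setup, ranking unseen programs by predicted coverage lets a tester prioritize or flag hard-to-test code without paying the cost of running the test generator first, which is the cost saving the abstract describes.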