How Good is My Software? A Simple Approach for Software Rating based on System Testing Results: A Case Study via Test-in-the Cloud Platform
Muhammad Dhiauddin Mohamed Suffian, Loo Fook Ann, Farah Farhana Mohd Nazri, F. R. Fahrurazi, Shi-Tzuaan Soo, Mohamed Redzuan Abdullah
2019 9th IEEE International Conference on Control System, Computing and Engineering (ICCSCE), November 2019
DOI: 10.1109/ICCSCE47578.2019.9068577 (https://doi.org/10.1109/ICCSCE47578.2019.9068577)
Citations: 0
Abstract
Knowing how good your software is prior to release can indicate whether the software will really work in the actual environment. Executing the system test makes this possible. By applying a simple analytics approach to the PASS or FAIL results of system test cases under each test strategy imposed, points can be assigned per test case for every test iteration, and scores can then be calculated. This is done for every test tool used under each test strategy. The average of the accumulated scores from all test strategies is mapped to a predefined rating table to establish the software product rating. The proposed approach can be applied to complete or ongoing system testing, serving as an early indicator of the software's expected behavior in the actual environment.
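As a rough illustration of the scoring flow the abstract describes, the following Python sketch assigns points to each PASS/FAIL test case result per strategy, averages the per-strategy scores, and maps the average onto a rating table. The point values, rating bands, and data layout are hypothetical assumptions for illustration only; the paper's actual point scheme and predefined rating table may differ.

```python
# Minimal sketch of the scoring idea: points per PASS/FAIL result,
# a score per test strategy, and an averaged score mapped to a rating.
# All concrete values below are assumptions, not the paper's tables.

from statistics import mean

# Hypothetical point scheme: 1 point for PASS, 0 for FAIL.
POINTS = {"PASS": 1, "FAIL": 0}

# Hypothetical rating table mapping an average score (0-100) to a label.
RATING_TABLE = [
    (90, "Excellent"),
    (75, "Good"),
    (50, "Fair"),
    (0, "Poor"),
]


def strategy_score(results):
    """Score one test strategy: percentage of points earned across all
    test case results (a flat list of 'PASS'/'FAIL' over iterations)."""
    if not results:
        return 0.0
    earned = sum(POINTS[r] for r in results)
    return 100.0 * earned / len(results)


def software_rating(results_by_strategy):
    """Average the per-strategy scores and map the average to a rating."""
    avg = mean(strategy_score(r) for r in results_by_strategy.values())
    for threshold, label in RATING_TABLE:
        if avg >= threshold:
            return avg, label
    return avg, "Unrated"


if __name__ == "__main__":
    # Example input: PASS/FAIL results grouped by test strategy
    # (strategy names here are illustrative only).
    results = {
        "functional": ["PASS", "PASS", "FAIL", "PASS"],
        "performance": ["PASS", "FAIL"],
        "security": ["PASS", "PASS", "PASS"],
    }
    score, label = software_rating(results)
    print(f"Average score: {score:.1f} -> rating: {label}")
```

In this sketch each strategy contributes equally to the final average, mirroring the abstract's description of averaging accumulated scores from all test strategies before looking up the product rating.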