An experiment in determining software reliability model applicability

A. Nikora, Michael R. Lyu

Proceedings of the Sixth International Symposium on Software Reliability Engineering (ISSRE'95), October 24, 1995. DOI: 10.1109/ISSRE.1995.497671
Most reported experience with software reliability models comes from a project's testing phases, during which researchers have little control over the failure data. Because failure data can be noisy and distorted, the reported procedures for determining model applicability may be incomplete. To gain additional insight into this problem, we generated forty data sets by drawing samples from two distributions and used them as inputs to six different software reliability models. We applied several different methods to analyze the applicability of the models. We expected each model to perform best on the data sets created to comply with its assumptions, but initially found that this was not always the case. More detailed examination showed that a model applied to a data set created to satisfy its assumptions tended to score better on the prequential likelihood, bias, and bias trend measures, although the Kolmogorov-Smirnov test might not be a reliable indicator of the best model. These results indicate that more than one measure should be used to determine model applicability, and that for greater accuracy the measures should be evaluated in sequence rather than simultaneously.
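The experimental idea can be illustrated with a minimal sketch (not the paper's actual procedure — the function names, sample size, and rate are illustrative assumptions): interfailure times are drawn from an exponential distribution, so the data satisfies a constant-failure-rate model assumption by construction; the rate is then fit by maximum likelihood, and a Kolmogorov-Smirnov statistic is computed against the fitted CDF.

```python
import math
import random

def simulate_interfailure_times(n, rate, seed=42):
    """Draw n interfailure times from an exponential distribution,
    i.e. data generated to satisfy a constant-failure-rate assumption."""
    rng = random.Random(seed)
    return [rng.expovariate(rate) for _ in range(n)]

def mle_rate(times):
    """Maximum-likelihood estimate of the exponential rate: n / sum(t)."""
    return len(times) / sum(times)

def ks_statistic(times, rate):
    """Kolmogorov-Smirnov distance between the empirical CDF of the
    observed times and the fitted exponential CDF F(t) = 1 - exp(-rate*t)."""
    xs = sorted(times)
    n = len(xs)
    d = 0.0
    for i, t in enumerate(xs):
        f = 1.0 - math.exp(-rate * t)
        # the empirical CDF steps from i/n to (i+1)/n at t
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

times = simulate_interfailure_times(n=200, rate=0.5)
lam = mle_rate(times)
d = ks_statistic(times, lam)
# Rough 5% critical value for large n. Note that estimating the rate
# from the same data makes the test optimistic -- one reason the K-S
# test alone can be an unreliable indicator of the best model.
critical = 1.36 / math.sqrt(len(times))
print(f"rate estimate {lam:.3f}, K-S statistic {d:.3f}, 5% cutoff {critical:.3f}")
```

Repeating such a simulation across several generating distributions and candidate models, and scoring each pairing with multiple measures (prequential likelihood, bias, bias trend, K-S), mirrors the kind of controlled comparison the abstract describes.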