{"title":"An empirical validation of the relationship between the magnitude of relative error and project size","authors":"E. Stensrud, T. Foss, B. Kitchenham, I. Myrtveit","doi":"10.1109/METRIC.2002.1011320","DOIUrl":null,"url":null,"abstract":"Cost estimates are important deliverables of a software project. Consequently, a number of cost prediction models have been proposed and evaluated. The common evaluation criteria have been MMRE, MdMRE and PRED(k). MRE is the basic metric in these evaluation criteria. The implicit rationale of using a relative error measure like MRE, rather than an absolute one, is presumably to have a measure that is independent of project size. We investigate if this implicit claim holds true for several data sets: Albrecht, Kemerer, Finnish, DMR and Accenture-ERP. The results suggest that MRE is not independent of project size. Rather, MRE is larger for small projects than for large projects. A practical consequence is that a project manager predicting a small project may falsely believe in a too low MRE. Vice versa when predicting a large project. For researchers, it is important to know that MMRE is not an appropriate measure of the expected MRE of small and large projects. We recommend therefore that the data set be partitioned into two or more subsamples and that MMRE is reported per subsample. In the long term, we should consider using other evaluation criteria.","PeriodicalId":165815,"journal":{"name":"Proceedings Eighth IEEE Symposium on Software Metrics","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2002-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"80","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Eighth IEEE Symposium on Software Metrics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/METRIC.2002.1011320","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 80
Abstract
Cost estimates are important deliverables of a software project. Consequently, a number of cost prediction models have been proposed and evaluated. The common evaluation criteria have been MMRE, MdMRE and PRED(k), all of which are built on the magnitude of relative error (MRE). The implicit rationale for using a relative error measure like MRE, rather than an absolute one, is presumably to obtain a measure that is independent of project size. We investigate whether this implicit claim holds for several data sets: Albrecht, Kemerer, Finnish, DMR and Accenture-ERP. The results suggest that MRE is not independent of project size; rather, MRE is larger for small projects than for large ones. A practical consequence is that a project manager predicting a small project may falsely expect a lower MRE than is warranted, and conversely when predicting a large project. For researchers, it is important to know that MMRE is not an appropriate measure of the expected MRE of both small and large projects. We therefore recommend that the data set be partitioned into two or more subsamples and that MMRE be reported per subsample. In the long term, we should consider using other evaluation criteria.
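For context, MRE for project i is |actual_i - predicted_i| / actual_i; MMRE is the mean of the MREs across projects, MdMRE the median, and PRED(k) the fraction of projects with MRE at most k. The sketch below illustrates the paper's recommendation of reporting MMRE per size-based subsample rather than pooled. It is a minimal illustration, not the authors' procedure: the median split on actual effort, the value k = 0.25, and the sample data are assumptions introduced here.

```python
import statistics

def mre(actual, predicted):
    """Magnitude of relative error for one project."""
    return abs(actual - predicted) / actual

def summarize(projects):
    """MMRE, MdMRE and PRED(0.25) for a list of (actual, predicted) pairs."""
    mres = [mre(a, p) for a, p in projects]
    mmre = statistics.mean(mres)
    mdmre = statistics.median(mres)
    pred_25 = sum(m <= 0.25 for m in mres) / len(mres)  # PRED(k), here k = 0.25
    return mmre, mdmre, pred_25

def mmre_by_size(projects):
    """Partition projects into small/large halves at the median actual effort
    (an illustrative choice; the paper only recommends two or more size-based
    subsamples) and report MMRE separately per subsample."""
    cut = statistics.median(a for a, _ in projects)
    small = [(a, p) for a, p in projects if a <= cut]
    large = [(a, p) for a, p in projects if a > cut]
    return {"small": summarize(small)[0], "large": summarize(large)[0]}

# Hypothetical (actual, predicted) effort pairs, e.g. in person-hours.
data = [(120, 180), (95, 60), (400, 430), (1500, 1400), (2200, 2600), (80, 130)]
print(summarize(data))      # pooled MMRE, MdMRE, PRED(0.25)
print(mmre_by_size(data))   # MMRE reported per size subsample
```

On size-heterogeneous data like this, the pooled MMRE sits between the per-subsample values, so it understates the relative error a manager should expect on a small project and overstates it on a large one, which is the caution the abstract raises.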