{"title":"关于工作量估算模型的评价","authors":"L. Lavazza, S. Morasca","doi":"10.1145/3084226.3084260","DOIUrl":null,"url":null,"abstract":"Background. Using accurate effort estimation models can help software companies plan, monitor, and control their development process and development costs. It is therefore important to define sound accuracy indicators that allow practitioners and researchers to assess and rank different effort estimation models so that practitioners can select the most accurate, and therefore useful one. Several accuracy indicators exist, with different advantages and disadvantages. Objective. We propose a general framework for building sound accuracy indicators for effort estimation models. Method. The accuracy indicators that comply with our proposal are built by means of a comparison between a reference effort estimation model and the specific model whose accuracy we would like to assess. Several existing indicators are built this way: we develop a framework so new indicators can be defined in a sound way. Results. From a theoretical point of view, we applied our approach to accuracy indicators based on the square of the residuals and the absolute value of the residuals. We show that using a random model as a reference model, as done in some recent literature, sets too low a bar in terms of what may be acceptable. Instead, we use reference models that are built based on constant functions. From a practical point of view, we applied our approach to datasets containing measures of industrial software development projects. With the proposed method we were able to derive indications both according to criteria already proposed in the literature and according to new criteria. Conclusions. Our method can be used to define sound accuracy indicators for effort estimation models.","PeriodicalId":192290,"journal":{"name":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"20","resultStr":"{\"title\":\"On the Evaluation of Effort Estimation Models\",\"authors\":\"L. Lavazza, S. Morasca\",\"doi\":\"10.1145/3084226.3084260\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Background. Using accurate effort estimation models can help software companies plan, monitor, and control their development process and development costs. It is therefore important to define sound accuracy indicators that allow practitioners and researchers to assess and rank different effort estimation models so that practitioners can select the most accurate, and therefore useful one. Several accuracy indicators exist, with different advantages and disadvantages. Objective. We propose a general framework for building sound accuracy indicators for effort estimation models. Method. The accuracy indicators that comply with our proposal are built by means of a comparison between a reference effort estimation model and the specific model whose accuracy we would like to assess. Several existing indicators are built this way: we develop a framework so new indicators can be defined in a sound way. Results. From a theoretical point of view, we applied our approach to accuracy indicators based on the square of the residuals and the absolute value of the residuals. 
We show that using a random model as a reference model, as done in some recent literature, sets too low a bar in terms of what may be acceptable. Instead, we use reference models that are built based on constant functions. From a practical point of view, we applied our approach to datasets containing measures of industrial software development projects. With the proposed method we were able to derive indications both according to criteria already proposed in the literature and according to new criteria. Conclusions. Our method can be used to define sound accuracy indicators for effort estimation models.\",\"PeriodicalId\":192290,\"journal\":{\"name\":\"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering\",\"volume\":\"33 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"20\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3084226.3084260\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 21st International Conference on Evaluation and Assessment in Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3084226.3084260","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract

Background. Using accurate effort estimation models can help software companies plan, monitor, and control their development process and development costs. It is therefore important to define sound accuracy indicators that allow practitioners and researchers to assess and rank different effort estimation models, so that practitioners can select the most accurate, and therefore most useful, one. Several accuracy indicators exist, each with its own advantages and disadvantages.

Objective. We propose a general framework for building sound accuracy indicators for effort estimation models.

Method. The accuracy indicators that comply with our proposal are built by comparing a reference effort estimation model with the specific model whose accuracy we want to assess. Several existing indicators are built this way; our framework makes it possible to define new indicators in a sound manner.

Results. From a theoretical point of view, we applied our approach to accuracy indicators based on squared residuals and on absolute residuals. We show that using a random model as a reference model, as done in some recent literature, sets too low a bar for what counts as acceptable accuracy. Instead, we use reference models built from constant functions. From a practical point of view, we applied our approach to datasets containing measures of industrial software development projects. With the proposed method we were able to derive indications both according to criteria already proposed in the literature and according to new criteria.

Conclusions. Our method can be used to define sound accuracy indicators for effort estimation models.
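The abstract does not give the exact formulas of the proposed indicators. The sketch below is only an illustration of the general idea it describes: comparing the residuals of the model under evaluation against those of a constant-function reference model (for example, one that always predicts the mean or median of the actual efforts), using both absolute and squared residuals. The function names, the ratio form, and the example data are assumptions made for illustration, not the paper's actual definitions.

```python
# Minimal sketch of a comparison-based accuracy indicator for effort estimation.
# Assumption (not taken from the paper): the indicator contrasts the evaluated
# model's residuals with those of a constant-function reference model, e.g. one
# that always predicts the mean (or median) of the actual efforts.

from statistics import mean, median


def residual_summaries(actual, predicted):
    """Return mean absolute residual (MAR) and mean squared residual (MSR)."""
    residuals = [a - p for a, p in zip(actual, predicted)]
    mar = mean(abs(r) for r in residuals)
    msr = mean(r * r for r in residuals)
    return mar, msr


def indicator_vs_constant_baseline(actual, predicted, baseline="mean"):
    """Compare an effort estimation model against a constant reference model.

    Returns two ratios (absolute-residual-based and squared-residual-based);
    values below 1 mean the evaluated model outperforms the constant baseline.
    """
    constant = mean(actual) if baseline == "mean" else median(actual)
    baseline_predictions = [constant] * len(actual)

    mar_model, msr_model = residual_summaries(actual, predicted)
    mar_base, msr_base = residual_summaries(actual, baseline_predictions)

    return mar_model / mar_base, msr_model / msr_base


# Hypothetical example data: actual vs. estimated effort (e.g., person-hours).
actual_effort = [120, 340, 95, 510, 260]
estimated_effort = [140, 300, 110, 480, 230]

print(indicator_vs_constant_baseline(actual_effort, estimated_effort))
```

In this reading, a random-guessing reference (as used in some recent literature) would replace the constant baseline with predictions sampled from the observed efforts, which the authors argue sets too low a bar; a constant-function baseline such as the mean or median is a stricter point of comparison.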