Cost implications of interrater agreement for software process assessments
K. Emam, Jean-Martin Simon, Sonia Rousseau, Eric Jacquet
Proceedings Fifth International Software Metrics Symposium. Metrics (Cat. No.98TB100262), 1998-03-20
DOI: 10.1109/METRIC.1998.731225
Much empirical research has been done on evaluating and modeling interrater agreement in software process assessments. Interrater agreement is the extent to which assessors agree in their ratings of software process capabilities when presented with the same evidence and performing their ratings independently. This line of research was based on the premise that lack of interrater agreement can lead to erroneous decisions from process assessment scores. However, thus far we do not know the impact of interrater agreement on the cost of assessments. We report on a study that evaluates the relationship between interrater agreement and the cost of the consolidation activity in assessments. The study was conducted in the context of two assessments using the emerging international standard ISO/IEC 15504. Our results indicate that for organizational processes, the relationship is strong and in the expected direction. For project level processes no relationship was found. These results indicate that for assessments that include organizational processes in their scope, ensuring high interrater agreement could lead to a reduction in their costs.
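The abstract defines interrater agreement as the extent to which independent assessors converge on the same capability ratings. One common way to quantify this (a hypothetical illustration, not a method stated in the paper) is Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each rater's
    marginal category frequencies.
    """
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Proportion of items on which the two raters gave the same rating.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a = Counter(ratings_a)
    counts_b = Counter(ratings_b)
    # Chance agreement: probability both raters independently pick the
    # same category, given their individual rating distributions.
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Illustrative data only: two assessors rating six processes on
# ISO/IEC 15504-style capability levels (values are made up).
a = [2, 3, 3, 1, 2, 4]
b = [2, 3, 2, 1, 2, 4]
print(round(cohens_kappa(a, b), 3))  # → 0.769
```

Kappa near 1 indicates strong agreement; values near 0 indicate agreement no better than chance, the situation the paper's premise warns can produce erroneous decisions from assessment scores.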