Putting the Kappa Statistic to Use

T. R. Nichols, Paola M. Wisner, Gary Cripe, Lakshmi Gulabchand
{"title":"Putting the Kappa Statistic to Use","authors":"T. R. Nichols, Paola M Wisner, Gary Cripe, Lakshmi Gulabchand","doi":"10.1002/QAJ.481","DOIUrl":null,"url":null,"abstract":"Inter-rater assessments of agreement are an essential criterion in the subjective evaluation of product quality. When assessments among raters demonstrate evidence of a lack of agreement (partial or total), there is a need to identify the source of disagreement. The objective being the reduction or mitigation of the influence different raters have on the assessment and the achievement of consistency among raters. The less influence that raters have on the assessment, the more confident one is in making critical to quality decisions. However, situations do exist in which user perceptions can be unreliable (not repeatable) and demonstrate poor correlation with engineered specifications. Quality management teams must be aware of this. When such situations exist, it is advisable to revisit the voice of the process as a reliable function of specification. Copyright © 2011 John Wiley & Sons, Ltd.","PeriodicalId":147931,"journal":{"name":"Quality Assurance Journal","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"74","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Quality Assurance Journal","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1002/QAJ.481","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 74

Abstract

Inter-rater agreement is an essential criterion in the subjective evaluation of product quality. When assessments among raters show evidence of a lack of agreement (partial or total), the source of disagreement must be identified. The objective is to reduce or mitigate the influence individual raters have on the assessment and to achieve consistency among raters. The less influence raters have on the assessment, the more confident one can be in making critical-to-quality decisions. However, situations do exist in which user perceptions are unreliable (not repeatable) and correlate poorly with engineered specifications; quality management teams must be aware of this. When such situations arise, it is advisable to revisit the voice of the process as a reliable function of the specification. Copyright © 2011 John Wiley & Sons, Ltd.
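
The statistic named in the title is Cohen's kappa, which corrects raw inter-rater agreement for the agreement expected by chance: κ = (p_o − p_e)/(1 − p_e). The sketch below is a minimal illustration of that two-rater form, not the authors' implementation; the function name, inspectors, and ratings are hypothetical.

```python
# Minimal sketch: Cohen's kappa for two raters scoring the same items.
# kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
# and p_e is the agreement expected by chance from each rater's marginals.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items on which the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of marginal rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two inspectors rating ten units as pass/fail.
inspector_1 = ["pass", "pass", "fail", "pass", "fail",
               "pass", "pass", "fail", "pass", "pass"]
inspector_2 = ["pass", "fail", "fail", "pass", "fail",
               "pass", "pass", "pass", "pass", "pass"]
print(f"kappa = {cohens_kappa(inspector_1, inspector_2):.3f}")  # kappa = 0.474
```

A kappa of 1 indicates perfect agreement, 0 indicates chance-level agreement, and negative values indicate systematic disagreement. In the example above the inspectors agree on 8 of 10 units, yet kappa is only about 0.47 once chance agreement is discounted, which is why kappa is preferred over raw percent agreement for critical-to-quality decisions.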