Measuring agreement among several raters classifying subjects into one or more (hierarchical) categories: A generalization of Fleiss' kappa.

IF 3.9 | CAS Tier 2 (Psychology) | JCR Q1 (Psychology, Experimental)
Filip Moons, Ellen Vandervieren
{"title":"Measuring agreement among several raters classifying subjects into one or more (hierarchical) categories: A generalization of Fleiss' kappa.","authors":"Filip Moons, Ellen Vandervieren","doi":"10.3758/s13428-025-02746-8","DOIUrl":null,"url":null,"abstract":"<p><p>Cohen's and Fleiss' kappa are well-known measures of inter-rater agreement, but they restrict each rater to selecting only one category per subject. This limitation is consequential in contexts where subjects may belong to multiple categories, such as psychiatric diagnoses involving multiple disorders or classifying interview snippets into multiple codes of a codebook. We propose a generalized version of Fleiss' kappa, which accommodates multiple raters assigning subjects to one or more nominal categories. Our proposed <math><mi>κ</mi></math> statistic can incorporate category weights based on their importance and account for hierarchical category structures, such as primary disorders with sub-disorders. The new <math><mi>κ</mi></math> statistic can also manage missing data and variations in the number of raters per subject or category. We review existing methods that allow for multiple category assignments and detail the derivation of our measure, proving its equivalence to Fleiss' kappa when raters select a single category per subject. The paper discusses the assumptions, premises, and potential paradoxes of the new measure, as well as the range of possible values and guidelines for interpretation. The measure was developed to investigate the reliability of a new mathematics assessment method, of which an example is elaborated. The paper concludes with a worked-out example of psychiatrists diagnosing patients with multiple disorders. All calculations are provided as R script and an Excel sheet to facilitate access to the new <math><mi>κ</mi></math> statistic.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 10","pages":"287"},"PeriodicalIF":3.9000,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12436533/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavior Research Methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13428-025-02746-8","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Cohen's and Fleiss' kappa are well-known measures of inter-rater agreement, but they restrict each rater to selecting only one category per subject. This limitation is consequential in contexts where subjects may belong to multiple categories, such as psychiatric diagnoses involving multiple disorders or classifying interview snippets into multiple codes of a codebook. We propose a generalized version of Fleiss' kappa, which accommodates multiple raters assigning subjects to one or more nominal categories. Our proposed κ statistic can incorporate category weights based on their importance and account for hierarchical category structures, such as primary disorders with sub-disorders. The new κ statistic can also manage missing data and variations in the number of raters per subject or category. We review existing methods that allow for multiple category assignments and detail the derivation of our measure, proving its equivalence to Fleiss' kappa when raters select a single category per subject. The paper discusses the assumptions, premises, and potential paradoxes of the new measure, as well as the range of possible values and guidelines for interpretation. The measure was developed to investigate the reliability of a new mathematics assessment method, an example of which is elaborated. The paper concludes with a worked-out example of psychiatrists diagnosing patients with multiple disorders. All calculations are provided as an R script and an Excel sheet to facilitate access to the new κ statistic.
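
For orientation, the sketch below computes classical Fleiss' kappa, the single-category special case to which the abstract says the generalized statistic reduces when every rater selects exactly one category per subject. It is not the authors' R script or their generalized measure; the function name `fleiss_kappa` and the counts-matrix layout are illustrative assumptions.

```r
# Minimal sketch of classical Fleiss' kappa (single category per rater).
# `counts` is an N x k matrix: counts[i, j] = number of raters who assigned
# subject i to category j, with the same number of raters n for every subject.
fleiss_kappa <- function(counts) {
  N <- nrow(counts)                 # number of subjects
  n <- sum(counts[1, ])             # raters per subject (assumed constant)
  p_j <- colSums(counts) / (N * n)  # marginal proportion of assignments per category
  P_i <- (rowSums(counts^2) - n) / (n * (n - 1))  # observed agreement on subject i
  P_bar <- mean(P_i)                # mean observed agreement
  P_e <- sum(p_j^2)                 # agreement expected by chance
  (P_bar - P_e) / (1 - P_e)         # kappa
}

# Toy example: 3 subjects, 4 raters, 3 categories.
counts <- matrix(c(4, 0, 0,
                   0, 2, 2,
                   1, 3, 0),
                 nrow = 3, byrow = TRUE)
fleiss_kappa(counts)  # approx. 0.38
```

The generalization described in the abstract goes beyond this baseline by allowing multiple category assignments per rater, category weights, hierarchical category structures, missing data, and varying numbers of raters; for those calculations, the paper's accompanying R script and Excel sheet should be consulted.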

Source journal: Behavior Research Methods
CiteScore: 10.30
Self-citation rate: 9.30%
Articles published: 266
Journal description: Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.