{"title":"基于分类量表的观察者间一致性的测量","authors":"M.A.A. Moussa","doi":"10.1016/0010-468X(85)90014-5","DOIUrl":null,"url":null,"abstract":"<div><p>The Kappa statistic is used to measure the interobserver similarity based on categorical scales. The cases of two or more observers with two or more rating categories are considered. Allowance is made for the attachment of disagreement weights, based on rational or clinical grounds, to different rating categories. Tests of hypotheses about the conditions Kappa = 0 and Kappa > 0 are conducted.</p></div>","PeriodicalId":75731,"journal":{"name":"Computer programs in biomedicine","volume":"19 2","pages":"Pages 221-228"},"PeriodicalIF":0.0000,"publicationDate":"1985-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1016/0010-468X(85)90014-5","citationCount":"3","resultStr":"{\"title\":\"The measurement of interobserver agreement based on categorical scales\",\"authors\":\"M.A.A. Moussa\",\"doi\":\"10.1016/0010-468X(85)90014-5\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The Kappa statistic is used to measure the interobserver similarity based on categorical scales. The cases of two or more observers with two or more rating categories are considered. Allowance is made for the attachment of disagreement weights, based on rational or clinical grounds, to different rating categories. Tests of hypotheses about the conditions Kappa = 0 and Kappa > 0 are conducted.</p></div>\",\"PeriodicalId\":75731,\"journal\":{\"name\":\"Computer programs in biomedicine\",\"volume\":\"19 2\",\"pages\":\"Pages 221-228\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1985-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1016/0010-468X(85)90014-5\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computer programs in biomedicine\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/0010468X85900145\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computer programs in biomedicine","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/0010468X85900145","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The measurement of interobserver agreement based on categorical scales
The Kappa statistic is used to measure interobserver agreement based on categorical scales. The cases of two or more observers with two or more rating categories are considered. Allowance is made for attaching disagreement weights, chosen on rational or clinical grounds, to the different rating categories. Tests of the hypotheses Kappa = 0 and Kappa > 0 are provided.
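As a rough illustration of the statistics the abstract describes (not the paper's original program), here is a minimal Python sketch of Cohen's kappa for two observers, with optional disagreement weights and a one-sided large-sample z test of Kappa = 0 against Kappa > 0 based on the null variance of Fleiss, Cohen and Everitt (1969). The function names and the three-category example table are hypothetical.

```python
import numpy as np
from math import erfc, sqrt

def kappa(table, disagreement_weights=None):
    """Cohen's (weighted) kappa for two observers rating the same
    subjects on the same r-point categorical scale.

    table : r x r array of counts; rows index observer 1's category,
            columns observer 2's.
    disagreement_weights : r x r array v with v[i][i] = 0 and larger
            values marking more serious disagreements (e.g. squared
            category distance). None reproduces unweighted kappa.
    """
    counts = np.asarray(table, dtype=float)
    p = counts / counts.sum()                # cell proportions
    r = p.shape[0]
    v = (1.0 - np.eye(r) if disagreement_weights is None
         else np.asarray(disagreement_weights, dtype=float))
    row, col = p.sum(axis=1), p.sum(axis=0)  # the two observers' marginals
    d_obs = (v * p).sum()                    # observed weighted disagreement
    d_exp = (v * np.outer(row, col)).sum()   # disagreement expected by chance
    return 1.0 - d_obs / d_exp

def kappa_test(table):
    """One-sided z test of H0: kappa = 0 vs H1: kappa > 0 for the
    unweighted statistic, using the large-sample null variance of
    Fleiss, Cohen and Everitt (1969)."""
    counts = np.asarray(table, dtype=float)
    n = counts.sum()
    p = counts / n
    row, col = p.sum(axis=1), p.sum(axis=0)
    po, pe = np.trace(p), row @ col          # observed and chance agreement
    k = (po - pe) / (1.0 - pe)
    var0 = (pe + pe**2 - np.sum(row * col * (row + col))) / (n * (1.0 - pe)**2)
    z = k / sqrt(var0)
    return k, z, 0.5 * erfc(z / sqrt(2.0))   # one-sided p-value

# Hypothetical data: two clinicians rating 100 cases on a 3-point scale.
table = [[30, 5, 2],
         [4, 25, 6],
         [1, 7, 20]]
k, z, p = kappa_test(table)           # unweighted kappa and its test
kw = kappa(table, [[0, 1, 4],         # squared-distance disagreement weights
                   [1, 0, 1],
                   [4, 1, 0]])
```

With disagreement weights set to 1 minus the identity matrix, the weighted form collapses to the familiar unweighted kappa, (p_o - p_e) / (1 - p_e); unequal weights simply count some off-diagonal disagreements as more serious than others, as the abstract's clinically motivated weighting suggests.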