A Model-Assisted Approach for Finding Coding Errors in Manual Coding of Open-Ended Questions

Impact factor 1.6 · CAS tier 4 (Mathematics) · JCR Q2 (Social Sciences, Mathematical Methods)
Zhoushanyue He, Matthias Schonlau
{"title":"开放式问题手工编码中编码错误的模型辅助发现方法","authors":"Zhoushanyue He, Matthias Schonlau","doi":"10.1093/jssam/smab022","DOIUrl":null,"url":null,"abstract":"\n Text answers to open-ended questions are typically manually coded into one of several codes. Usually, a random subset of text answers is double-coded to assess intercoder reliability, but most of the data remain single-coded. Any disagreement between the two coders points to an error by one of the coders. When the budget allows double coding additional text answers, we propose employing statistical learning models to predict which single-coded answers have a high risk of a coding error. Specifically, we train a model on the double-coded random subset and predict the probability that the single-coded codes are correct. Then, text answers with the highest risk are double-coded to verify. In experiments with three data sets, we found that this method identifies two to three times as many coding errors in the additional text answers as compared to random guessing, on average. We conclude that this method is preferred if the budget permits additional double-coding. When there are a lot of intercoder disagreements, the benefit can be substantial.","PeriodicalId":17146,"journal":{"name":"Journal of Survey Statistics and Methodology","volume":" ","pages":""},"PeriodicalIF":1.6000,"publicationDate":"2021-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"A Model-Assisted Approach for Finding Coding Errors in Manual Coding of Open-Ended Questions\",\"authors\":\"Zhoushanyue He, Matthias Schonlau\",\"doi\":\"10.1093/jssam/smab022\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\n Text answers to open-ended questions are typically manually coded into one of several codes. Usually, a random subset of text answers is double-coded to assess intercoder reliability, but most of the data remain single-coded. Any disagreement between the two coders points to an error by one of the coders. When the budget allows double coding additional text answers, we propose employing statistical learning models to predict which single-coded answers have a high risk of a coding error. Specifically, we train a model on the double-coded random subset and predict the probability that the single-coded codes are correct. Then, text answers with the highest risk are double-coded to verify. In experiments with three data sets, we found that this method identifies two to three times as many coding errors in the additional text answers as compared to random guessing, on average. We conclude that this method is preferred if the budget permits additional double-coding. 
When there are a lot of intercoder disagreements, the benefit can be substantial.\",\"PeriodicalId\":17146,\"journal\":{\"name\":\"Journal of Survey Statistics and Methodology\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2021-08-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Survey Statistics and Methodology\",\"FirstCategoryId\":\"100\",\"ListUrlMain\":\"https://doi.org/10.1093/jssam/smab022\",\"RegionNum\":4,\"RegionCategory\":\"数学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"SOCIAL SCIENCES, MATHEMATICAL METHODS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Survey Statistics and Methodology","FirstCategoryId":"100","ListUrlMain":"https://doi.org/10.1093/jssam/smab022","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"SOCIAL SCIENCES, MATHEMATICAL METHODS","Score":null,"Total":0}
Citations: 4

Abstract

Text answers to open-ended questions are typically manually coded into one of several codes. Usually, a random subset of text answers is double-coded to assess intercoder reliability, but most of the data remain single-coded. Any disagreement between the two coders points to an error by one of the coders. When the budget allows double coding additional text answers, we propose employing statistical learning models to predict which single-coded answers have a high risk of a coding error. Specifically, we train a model on the double-coded random subset and predict the probability that the single-coded codes are correct. Then, text answers with the highest risk are double-coded to verify. In experiments with three data sets, we found that this method identifies two to three times as many coding errors in the additional text answers as compared to random guessing, on average. We conclude that this method is preferred if the budget permits additional double-coding. When there are a lot of intercoder disagreements, the benefit can be substantial.
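The following is a minimal sketch of the model-assisted error-finding idea described in the abstract: train a classifier on the double-coded random subset to predict whether the assigned code is correct, then rank the single-coded answers by predicted risk and send the riskiest ones for a second coding pass. It is an illustrative reconstruction, not the authors' code; the TF-IDF features, the logistic-regression learner, the toy data, and all variable names are assumptions.

```python
# Illustrative sketch only; the paper's exact features and learners may differ.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Double-coded random subset: text answers, coder 1's code, and coder 2's code.
double_texts = ["likes the flexible hours", "commute is too long", "enjoys the team"]
double_code1 = ["work_conditions", "commute", "work_conditions"]
double_code2 = ["work_conditions", "work_conditions", "work_conditions"]
# Label: 1 if the two coders agreed (code taken as correct), 0 otherwise.
agree = np.array([c1 == c2 for c1, c2 in zip(double_code1, double_code2)], dtype=int)

# Combine the text with the assigned code into one feature string so the model
# can learn which (answer, code) pairs tend to be disputed.
train_features = [t + " CODE_" + c for t, c in zip(double_texts, double_code1)]
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_features, agree)

# Single-coded answers: predict the probability that the assigned code is
# correct and flag the highest-risk answers for verification.
single_texts = ["pay is fine but hours are long", "short walk to the office"]
single_code1 = ["pay", "commute"]
single_features = [t + " CODE_" + c for t, c in zip(single_texts, single_code1)]
p_correct = model.predict_proba(single_features)[:, 1]

budget = 1  # how many additional answers the budget allows to double-code
riskiest = np.argsort(p_correct)[:budget]  # lowest predicted correctness first
print("Answers to double-code for verification:", riskiest)
```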
Source journal
CiteScore: 4.30
Self-citation rate: 9.50%
Articles published: 40
About the journal: The Journal of Survey Statistics and Methodology, sponsored by AAPOR and the American Statistical Association, began publishing in 2013. Its objective is to publish cutting-edge scholarly articles on statistical and methodological issues for sample surveys, censuses, administrative record systems, and other related data. It aims to be the flagship journal for research on survey statistics and methodology.

Topics of interest include survey sample design, statistical inference, nonresponse, measurement error, the effects of modes of data collection, paradata and responsive survey design, combining data from multiple sources, record linkage, disclosure limitation, and other issues in survey statistics and methodology. The journal publishes both theoretical and applied papers, provided the theory is motivated by an important applied problem and the applied papers report on research that contributes generalizable knowledge to the field. Review papers are also welcomed. Papers on a broad range of surveys are encouraged, including (but not limited to) surveys concerning business, economics, marketing research, social science, environment, epidemiology, biostatistics, and official statistics.

The journal has three sections. The Survey Statistics section presents papers on innovative sampling procedures, imputation, weighting, measures of uncertainty, small area inference, new methods of analysis, and other statistical issues related to surveys. The Survey Methodology section presents papers that focus on methodological research, including methodological experiments, methods of data collection, and use of paradata. The Applications section contains papers involving innovative applications of methods and providing practical contributions and guidance, and/or significant new findings.