{"title":"识别学习者语料库中的错误——错误定位与错误描述的两个阶段以及测量和报告注释者间一致性的后果","authors":"Nikola Dobrić","doi":"10.1016/j.acorp.2022.100039","DOIUrl":null,"url":null,"abstract":"<div><p>Marking errors in L2 learner performance, though useful in both a didactic and academic sense, is a challenging process, one usually performed manually when involving learner corpora. This is because errors are largely latent phenomena whose manual identification and description involve a significant degree of judgment on the side of human annotators. The purpose of the paper is to discuss and demonstrate the implications of the two stages of the decision-making process that is manual error coding, <em>error location</em> and <em>error description</em>, for measuring inter-annotator agreement as a marker of quality of annotation. The crux of the study is in the proposal that inter-annotator agreement on error location and on error description should be considered and reported separately rather than, as is common, together as a single measurement. The case study, grounded in a high-stakes exam context and typified using an established error taxonomy, demonstrates the method behind the proposal and showcases its usefulness in real-world settings.</p></div>","PeriodicalId":72254,"journal":{"name":"Applied Corpus Linguistics","volume":"3 1","pages":"Article 100039"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Identifying errors in a learner corpus – the two stages of error location vs. error description and consequences for measuring and reporting inter-annotator agreement\",\"authors\":\"Nikola Dobrić\",\"doi\":\"10.1016/j.acorp.2022.100039\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Marking errors in L2 learner performance, though useful in both a didactic and academic sense, is a challenging process, one usually performed manually when involving learner corpora. This is because errors are largely latent phenomena whose manual identification and description involve a significant degree of judgment on the side of human annotators. The purpose of the paper is to discuss and demonstrate the implications of the two stages of the decision-making process that is manual error coding, <em>error location</em> and <em>error description</em>, for measuring inter-annotator agreement as a marker of quality of annotation. The crux of the study is in the proposal that inter-annotator agreement on error location and on error description should be considered and reported separately rather than, as is common, together as a single measurement. 
The case study, grounded in a high-stakes exam context and typified using an established error taxonomy, demonstrates the method behind the proposal and showcases its usefulness in real-world settings.</p></div>\",\"PeriodicalId\":72254,\"journal\":{\"name\":\"Applied Corpus Linguistics\",\"volume\":\"3 1\",\"pages\":\"Article 100039\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Applied Corpus Linguistics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S2666799122000235\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Applied Corpus Linguistics","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666799122000235","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Identifying errors in a learner corpus – the two stages of error location vs. error description and consequences for measuring and reporting inter-annotator agreement
Marking errors in L2 learner performance, though useful in both a didactic and academic sense, is a challenging process, one usually performed manually when involving learner corpora. This is because errors are largely latent phenomena whose manual identification and description involve a significant degree of judgment on the part of human annotators. The purpose of the paper is to discuss and demonstrate the implications of the two stages of the decision-making process of manual error coding, namely error location and error description, for measuring inter-annotator agreement as a marker of annotation quality. The crux of the study is the proposal that inter-annotator agreement on error location and on error description should be considered and reported separately rather than, as is common, together as a single measurement. The case study, grounded in a high-stakes exam context and typified using an established error taxonomy, demonstrates the method behind the proposal and showcases its usefulness in real-world settings.
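
To make the core proposal concrete, the sketch below shows one way the two stages could be scored separately: agreement on error location over the spans each annotator marked, and agreement on error description computed only over spans both annotators located, so that disagreement about where an error is cannot contaminate the measure of agreement about what kind of error it is. This is a minimal illustration only; the (start, end) span representation, the error labels, and the specific metrics (span overlap for location, Cohen's kappa for description) are assumptions made here for demonstration, not the representation, taxonomy, or statistics prescribed in the paper.

```python
from collections import Counter

# Hypothetical representation: each annotator's output for one learner text is a
# dict mapping an error span (start, end) to an error category label.

def location_agreement(ann_a, ann_b):
    """Stage 1: agreement on WHERE errors are, ignoring category labels."""
    spans_a, spans_b = set(ann_a), set(ann_b)
    if not spans_a and not spans_b:
        return 1.0
    # Shared spans divided by spans marked by either annotator (Jaccard overlap).
    return len(spans_a & spans_b) / len(spans_a | spans_b)

def description_agreement(ann_a, ann_b):
    """Stage 2: Cohen's kappa on category labels, computed only over spans that
    both annotators located."""
    shared = sorted(set(ann_a) & set(ann_b))
    if not shared:
        return float("nan")
    labels_a = [ann_a[s] for s in shared]
    labels_b = [ann_b[s] for s in shared]
    n = len(shared)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's marginal label distribution.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[lab] * freq_b.get(lab, 0) for lab in freq_a) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

# Toy example with illustrative (not the paper's) error categories.
annotator_1 = {(4, 9): "TENSE", (15, 21): "LEX_CHOICE", (30, 34): "AGREEMENT"}
annotator_2 = {(4, 9): "TENSE", (15, 21): "WORD_ORDER", (40, 45): "SPELLING"}

print(f"location agreement:    {location_agreement(annotator_1, annotator_2):.2f}")
print(f"description agreement: {description_agreement(annotator_1, annotator_2):.2f}")
```

Reporting the two numbers separately, as the paper proposes, makes it visible when annotators agree well on where errors occur but diverge on how to classify them, a distinction that a single pooled agreement figure would hide.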