{"title":"Towards increased reliability and transparency in projects with manual linguistic coding","authors":"Nicole Hober, Tülay Dixon, Tove Larsson","doi":"10.3366/cor.2023.0284","DOIUrl":null,"url":null,"abstract":"Manually coded data form the basis of many of our analyses in corpus linguistics. It is thus imperative that we work towards increased reliability and enhanced transparency in our coding practices, since failing to do so may ultimately lead us to draw erroneous conclusions about language. Using spoken data from a study on adverb usage for illustration, this methods paper discusses some strategies for identifying threats to the reliability of our coding and offers suggestions for how to mitigate these and ensure that our coding can be assessed and replicated. The paper also includes suggestions for best practices for manual linguistic coding and concludes with a discussion of the benefits of such practices. With this paper, we expand on the ongoing discussions in the field on issues of reliability and transparency as they relate to manual coding. We argue that while tests of inter-rater reliability offer a helpful starting point, further steps are needed to ensure increased reliability and transparency.","PeriodicalId":44933,"journal":{"name":"Corpora","volume":null,"pages":null},"PeriodicalIF":0.8000,"publicationDate":"2023-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Corpora","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3366/cor.2023.0284","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"LINGUISTICS","Score":null,"Total":0}
Citations: 1
Abstract
Manually coded data form the basis of many of our analyses in corpus linguistics. It is thus imperative that we work towards increased reliability and enhanced transparency in our coding practices, since failing to do so may ultimately lead us to draw erroneous conclusions about language. Using spoken data from a study on adverb usage for illustration, this methods paper discusses some strategies for identifying threats to the reliability of our coding and offers suggestions for how to mitigate these and ensure that our coding can be assessed and replicated. The paper also includes suggestions for best practices for manual linguistic coding and concludes with a discussion of the benefits of such practices. With this paper, we expand on the ongoing discussions in the field on issues of reliability and transparency as they relate to manual coding. We argue that while tests of inter-rater reliability offer a helpful starting point, further steps are needed to ensure increased reliability and transparency.
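The abstract points to tests of inter-rater reliability as a helpful starting point. As a purely illustrative aside, not drawn from the paper itself, the sketch below shows one common such test, Cohen's kappa, computed for two coders who have assigned categories to the same set of tokens. The label set and the example data are hypothetical; the paper does not prescribe any particular measure or tool.

from collections import Counter

def cohen_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels on the same items."""
    assert len(coder_a) == len(coder_b), "Both coders must label the same items."
    n = len(coder_a)
    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label distribution.
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical adverb-function labels from two coders (illustrative only).
coder_a = ["stance", "linking", "stance", "circumstance", "stance", "linking"]
coder_b = ["stance", "linking", "circumstance", "circumstance", "stance", "stance"]
print(f"Cohen's kappa: {cohen_kappa(coder_a, coder_b):.2f}")

On these toy data the script prints a kappa of about 0.48, which would typically prompt the kind of follow-up steps the paper argues for, such as revisiting the coding scheme and documenting how disagreements were resolved, rather than treating the agreement statistic as the end of the process.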