{"title":"The use of intercoder reliability in qualitative interview data analysis in science education","authors":"Kason Ka Ching Cheung, Kevin W. H. Tai","doi":"10.1080/02635143.2021.1993179","DOIUrl":null,"url":null,"abstract":"ABSTRACT Background Intercoder reliability is a statistic commonly reported by researchers to demonstrate the rigour of coding procedures during data analysis. Its importance is debatable in the analysis of qualitative interview data. It raises a question on whether researchers should identify the same codes and themes in a transcript or they should produce different accounts in analyzing the transcript. Purpose This study reports how articles in four science education journals, International Journal of Science Education, Research in Science Education, Journal of Research in Science Teaching and Science Education report intercoder reliability in their analysis of interview data. Methods This article explores whether 103 papers published in these science education journals in a single year (2019) have reported intercoder reliability test when the authors analyse their interview data. It was found that 19 papers have reported the test results. Findings The authors of these studies have different interpretation towards a similar value of intercoder reliability. Moreover, the percentage of data used in the intercoder reliability test and the identity of intercoder vary across the studies. As a result, this paper aims to raise an issue on whether a replicability of coding can show the reliability of the results when researchers analyze interview data. Conclusion We propose two major principles when authors report the reliability of the analysis of interview data: transparency and explanatory. We also argue that only when the authors report intercoder reliability test results that are based on these two principles, the reliability statistics of studies are convincing to readers. Some suggestions are offered to authors regarding how to carry out, analyze and report the intercoder reliability test.","PeriodicalId":46656,"journal":{"name":"Research in Science & Technological Education","volume":"41 1","pages":"1155 - 1175"},"PeriodicalIF":1.8000,"publicationDate":"2021-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Research in Science & Technological Education","FirstCategoryId":"95","ListUrlMain":"https://doi.org/10.1080/02635143.2021.1993179","RegionNum":4,"RegionCategory":"教育学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"EDUCATION & EDUCATIONAL RESEARCH","Score":null,"Total":0}
Citations: 23
Abstract
Background: Intercoder reliability is a statistic commonly reported by researchers to demonstrate the rigour of coding procedures during data analysis. Its importance in the analysis of qualitative interview data is debatable: it raises the question of whether researchers should identify the same codes and themes in a transcript, or whether they may legitimately produce different accounts when analysing it.

Purpose: This study examines how articles in four science education journals, International Journal of Science Education, Research in Science Education, Journal of Research in Science Teaching and Science Education, report intercoder reliability in their analyses of interview data.

Methods: This article examines whether 103 papers published in these science education journals in a single year (2019) reported an intercoder reliability test when the authors analysed their interview data. Nineteen of these papers reported test results.

Findings: The authors of these studies interpret similar intercoder reliability values differently. Moreover, the proportion of data used in the intercoder reliability test and the identity of the intercoders vary across the studies. This paper therefore raises the issue of whether replicability of coding can demonstrate the reliability of results when researchers analyse interview data.

Conclusion: We propose two major principles for reporting the reliability of interview data analysis: reporting should be transparent and explanatory. We argue that the reliability statistics of a study are convincing to readers only when intercoder reliability test results are reported in line with these two principles. Suggestions are offered to authors on how to carry out, analyse and report an intercoder reliability test.
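The abstract does not specify which intercoder reliability statistic the reviewed papers report, and practice varies (percent agreement, Cohen's kappa and Krippendorff's alpha are common choices). As an illustration only, the sketch below computes two frequently reported statistics, percent agreement and Cohen's kappa, for two hypothetical coders who each assigned one code from a shared codebook to the same set of interview segments; the coder data and code labels are invented for this example and are not drawn from the article.

```python
# Hypothetical illustration: percent agreement and Cohen's kappa for two coders
# who each assigned one code (from a shared codebook) to the same interview segments.
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Share of segments to which both coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)          # observed agreement
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal code frequencies
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned to ten interview segments by two coders
coder_a = ["inquiry", "inquiry", "identity", "identity", "inquiry",
           "argument", "identity", "inquiry", "argument", "identity"]
coder_b = ["inquiry", "identity", "identity", "identity", "inquiry",
           "argument", "inquiry", "inquiry", "argument", "identity"]

print(f"Percent agreement: {percent_agreement(coder_a, coder_b):.2f}")  # 0.80
print(f"Cohen's kappa:     {cohens_kappa(coder_a, coder_b):.2f}")       # 0.69
```

Whatever statistic is chosen, the two principles proposed in the abstract suggest that what matters is reporting it transparently and explaining the choices behind it: which statistic was used, what proportion of the data was double-coded, who the second coder was, and how disagreements were resolved.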