The Relationship between Intercoder Reliability of Data Extraction and Effect Measure Calculation in Single-Case Meta-Analysis

Impact Factor: 3.0 · Tier 3 (Psychology) · Q1 (Social Sciences)
Daniel D. Drevon, Allison M. Peart, Elizabeth T. Koval
{"title":"单例元分析中数据提取的互码可靠性与效应测度计算的关系","authors":"Daniel D. Drevon, Allison M. Peart, Elizabeth T. Koval","doi":"10.1080/2372966x.2023.2273822","DOIUrl":null,"url":null,"abstract":"AbstractMeta-analyzing data from single-case experimental designs (SCEDs) usually requires data extraction, a process by which numerical values are obtained from linear graphs in primary studies, prior to calculating and aggregating single-case effect measures. Existing research suggests data extraction yields reliable and valid data; however, we have an incomplete understanding of the downstream effects of relying on data extracted by two or more people. This study was undertaken to enhance that understanding in the context of SCEDs published in school psychology journals. Data for 91 unique outcomes across 67 cases in 20 SCEDs were extracted by two data extractors. Four different single-case effect measures were calculated using data extracted by each data extractor and then compared to determine the similarity of the effect measures. Overall, intercoder reliability metrics suggested a high degree of agreement, and there were minimal differences in single-case effect measures calculated from data extracted by different researchers. Intercoder reliability metrics and differences in single-case effect measures were generally negatively related, though the strength varied depending on the single-case effect measure. Hence, it is unlikely that the small differences in effect measure estimates due to the slight unreliability of the data extraction process would have a considerable impact on the interpretation of single-case effect measures.Impact StatementTwo people extracted highly similar numerical data from the same linear graphs using plot digitizing software. Differences in calculations across data extracted by two people were trivial. Results suggest researchers can likely have confidence in the calculation of effect measures aggregated in meta-analyses of single-case experimental designs, provided they achieve comparable levels of agreement amongst data extractors.Keywords: single subject designsmeta-analysisresearch methodsASSOCIATE EDITOR: Jorge E. Gonzalez DISCLOSUREThe authors have no conflicts of interest to report.Open ScholarshipThis article has earned the Center for Open Science badges for Open Data and Open Materials through Open Practices Disclosure. The data and materials are openly accessible at https://osf.io/249w7/ and https://osf.io/249w7/. To obtain the author's disclosure form, please contact the Editor.Additional informationFundingThis study was supported by the Faculty Research and Creative Endeavors committee at Central Michigan University.Notes on contributorsDaniel D. DrevonDaniel D. Drevon, PhD, is an Associate Professor and Program Director with the School Psychology Program at Central Michigan University. He is interested in academic and behavioral interventions, single-case experimental design, and research synthesis/meta-analysis.Allison M. PeartAllison M. Peart, MA, is a doctoral candidate in the School Psychology Program at Central Michigan University and predoctoral intern at the University of Nebraska Medical Center’s Munroe-Meyer Institute. She is interested in behavioral interventions, single-case experimental design, and research synthesis/meta-analysis.Elizabeth T. KovalElizabeth T. Koval, MA, is a doctoral candidate in the School Psychology Program at Central Michigan University. 
She is interested in behavioral interventions and teacher consultation.","PeriodicalId":21555,"journal":{"name":"School Psychology Review","volume":null,"pages":null},"PeriodicalIF":3.0000,"publicationDate":"2023-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Relationship between Intercoder Reliability of Data Extraction and Effect Measure Calculation in Single-Case Meta-Analysis\",\"authors\":\"Daniel D. Drevon, Allison M. Peart, Elizabeth T. Koval\",\"doi\":\"10.1080/2372966x.2023.2273822\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"AbstractMeta-analyzing data from single-case experimental designs (SCEDs) usually requires data extraction, a process by which numerical values are obtained from linear graphs in primary studies, prior to calculating and aggregating single-case effect measures. Existing research suggests data extraction yields reliable and valid data; however, we have an incomplete understanding of the downstream effects of relying on data extracted by two or more people. This study was undertaken to enhance that understanding in the context of SCEDs published in school psychology journals. Data for 91 unique outcomes across 67 cases in 20 SCEDs were extracted by two data extractors. Four different single-case effect measures were calculated using data extracted by each data extractor and then compared to determine the similarity of the effect measures. Overall, intercoder reliability metrics suggested a high degree of agreement, and there were minimal differences in single-case effect measures calculated from data extracted by different researchers. Intercoder reliability metrics and differences in single-case effect measures were generally negatively related, though the strength varied depending on the single-case effect measure. Hence, it is unlikely that the small differences in effect measure estimates due to the slight unreliability of the data extraction process would have a considerable impact on the interpretation of single-case effect measures.Impact StatementTwo people extracted highly similar numerical data from the same linear graphs using plot digitizing software. Differences in calculations across data extracted by two people were trivial. Results suggest researchers can likely have confidence in the calculation of effect measures aggregated in meta-analyses of single-case experimental designs, provided they achieve comparable levels of agreement amongst data extractors.Keywords: single subject designsmeta-analysisresearch methodsASSOCIATE EDITOR: Jorge E. Gonzalez DISCLOSUREThe authors have no conflicts of interest to report.Open ScholarshipThis article has earned the Center for Open Science badges for Open Data and Open Materials through Open Practices Disclosure. The data and materials are openly accessible at https://osf.io/249w7/ and https://osf.io/249w7/. To obtain the author's disclosure form, please contact the Editor.Additional informationFundingThis study was supported by the Faculty Research and Creative Endeavors committee at Central Michigan University.Notes on contributorsDaniel D. DrevonDaniel D. Drevon, PhD, is an Associate Professor and Program Director with the School Psychology Program at Central Michigan University. He is interested in academic and behavioral interventions, single-case experimental design, and research synthesis/meta-analysis.Allison M. PeartAllison M. 
Peart, MA, is a doctoral candidate in the School Psychology Program at Central Michigan University and predoctoral intern at the University of Nebraska Medical Center’s Munroe-Meyer Institute. She is interested in behavioral interventions, single-case experimental design, and research synthesis/meta-analysis.Elizabeth T. KovalElizabeth T. Koval, MA, is a doctoral candidate in the School Psychology Program at Central Michigan University. She is interested in behavioral interventions and teacher consultation.\",\"PeriodicalId\":21555,\"journal\":{\"name\":\"School Psychology Review\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-10-27\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"School Psychology Review\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/2372966x.2023.2273822\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"School Psychology Review","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/2372966x.2023.2273822","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
Citations: 0

Abstract

Meta-analyzing data from single-case experimental designs (SCEDs) usually requires data extraction, a process by which numerical values are obtained from linear graphs in primary studies, prior to calculating and aggregating single-case effect measures. Existing research suggests data extraction yields reliable and valid data; however, we have an incomplete understanding of the downstream effects of relying on data extracted by two or more people. This study was undertaken to enhance that understanding in the context of SCEDs published in school psychology journals. Data for 91 unique outcomes across 67 cases in 20 SCEDs were extracted by two data extractors. Four different single-case effect measures were calculated using data extracted by each data extractor and then compared to determine the similarity of the effect measures. Overall, intercoder reliability metrics suggested a high degree of agreement, and there were minimal differences in single-case effect measures calculated from data extracted by different researchers. Intercoder reliability metrics and differences in single-case effect measures were generally negatively related, though the strength varied depending on the single-case effect measure. Hence, it is unlikely that the small differences in effect measure estimates due to the slight unreliability of the data extraction process would have a considerable impact on the interpretation of single-case effect measures.

Impact Statement

Two people extracted highly similar numerical data from the same linear graphs using plot digitizing software. Differences in calculations across data extracted by two people were trivial. Results suggest researchers can likely have confidence in the calculation of effect measures aggregated in meta-analyses of single-case experimental designs, provided they achieve comparable levels of agreement among data extractors.

Keywords: single subject designs; meta-analysis; research methods

ASSOCIATE EDITOR: Jorge E. Gonzalez

Disclosure

The authors have no conflicts of interest to report.

Open Scholarship

This article has earned the Center for Open Science badges for Open Data and Open Materials through Open Practices Disclosure. The data and materials are openly accessible at https://osf.io/249w7/. To obtain the authors' disclosure form, please contact the Editor.

Funding

This study was supported by the Faculty Research and Creative Endeavors committee at Central Michigan University.

Notes on contributors

Daniel D. Drevon, PhD, is an Associate Professor and Program Director with the School Psychology Program at Central Michigan University. He is interested in academic and behavioral interventions, single-case experimental design, and research synthesis/meta-analysis.

Allison M. Peart, MA, is a doctoral candidate in the School Psychology Program at Central Michigan University and a predoctoral intern at the University of Nebraska Medical Center's Munroe-Meyer Institute. She is interested in behavioral interventions, single-case experimental design, and research synthesis/meta-analysis.

Elizabeth T. Koval, MA, is a doctoral candidate in the School Psychology Program at Central Michigan University. She is interested in behavioral interventions and teacher consultation.
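To make the comparison described in the abstract concrete, the sketch below shows one way the workflow can be set up: take values digitized from the same A-B graph by two extractors, compute a common single-case effect measure (Nonoverlap of All Pairs, NAP) from each extractor's data, and summarize intercoder agreement with a Pearson correlation and the mean absolute difference in extracted values. This is a minimal illustration under stated assumptions, not the authors' actual analysis pipeline; the data, the choice of NAP, and the reliability metrics are invented for the example, whereas the study compared four effect measures across 91 outcomes.

# Illustrative sketch only: hypothetical data and metric choices,
# not the published study's analysis code.
from itertools import product
from math import sqrt

def nap(phase_a, phase_b):
    """Nonoverlap of All Pairs: share of (A, B) pairs in which the B-phase
    value exceeds the A-phase value, with ties counted as half."""
    pairs = list(product(phase_a, phase_b))
    wins = sum(1.0 if b > a else 0.5 if b == a else 0.0 for a, b in pairs)
    return wins / len(pairs)

def pearson(x, y):
    """Pearson correlation between two equally long lists of values."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical values digitized from the same A-B graph by two extractors.
extractor_1 = {"A": [2.0, 3.0, 2.5, 3.0], "B": [5.0, 6.0, 6.5, 7.0, 6.0]}
extractor_2 = {"A": [2.1, 3.0, 2.4, 3.1], "B": [5.0, 5.9, 6.5, 7.1, 6.0]}

# Intercoder reliability of the raw extracted series (phases pooled).
series_1 = extractor_1["A"] + extractor_1["B"]
series_2 = extractor_2["A"] + extractor_2["B"]
reliability_r = pearson(series_1, series_2)
mean_abs_diff = sum(abs(a - b) for a, b in zip(series_1, series_2)) / len(series_1)

# Effect measure computed separately from each extractor's data, then compared.
nap_1 = nap(extractor_1["A"], extractor_1["B"])
nap_2 = nap(extractor_2["A"], extractor_2["B"])

print(f"Pearson r between extractors: {reliability_r:.3f}")
print(f"Mean absolute difference:     {mean_abs_diff:.3f}")
print(f"NAP (extractor 1): {nap_1:.3f}   NAP (extractor 2): {nap_2:.3f}")
print(f"Absolute difference in NAP:   {abs(nap_1 - nap_2):.3f}")

In a meta-analytic workflow this comparison would be repeated for every outcome and case, which is how the relationship between intercoder reliability of extraction and differences in the resulting effect measures can be examined across a corpus of SCEDs.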
Source journal

School Psychology Review (Social Sciences - Education)
CiteScore: 6.90
Self-citation rate: 20.00%
Articles published: 54

About the journal: School Psychology Review (SPR) is a refereed journal published quarterly by NASP. Its primary purpose is to provide a means for communicating scholarly advances in research, training, and practice related to psychology and education, and specifically to school psychology. Of particular interest are articles presenting original, data-based research that can contribute to the development of innovative intervention and prevention strategies and the evaluation of these approaches. SPR presents important conceptual developments and empirical findings from a wide range of disciplines (e.g., educational, child clinical, pediatric, community).