Comparing χ2 Tables for Separability of Distribution and Effect: Meta-Tests for Comparing Homogeneity and Goodness of Fit Contingency Test Outcomes

S. Wallis
Journal of Quantitative Linguistics, published 2019-10-02
DOI: 10.1080/09296174.2018.1496537
Impact Factor: 0.7 · JCR category: LANGUAGE & LINGUISTICS
Citations: 2

Abstract

This paper describes a series of statistical meta-tests for comparing independent contingency tables for different types of significant difference. Recognizing when an experiment obtains a significantly different result, and when it does not, is frequently overlooked in research publication. Papers frequently cite 'p values' or test scores, suggesting a 'stronger effect', as a substitute for sound statistical reasoning. This paper sets out a series of tests that together illustrate the correct approach to this question. These meta-tests permit us to evaluate whether experiments have failed to replicate on new data; whether a particular data source or subcorpus obtains a significantly different result from another; or whether changing experimental parameters obtains a stronger effect. The meta-tests are derived mathematically from the χ2 test and the Wilson score interval, and consist of pairwise 'point' tests, 'homogeneity' tests and 'goodness of fit' tests. Meta-tests for comparing tests with one degree of freedom (e.g. '2 × 1' and '2 × 2' tests) are generalized to those of arbitrary size. Finally, we compare our approach with a competing approach offered by Zar, which, while straightforward to calculate, turns out to be both less powerful and less robust. (Note: A spreadsheet including all the tests in this paper is publicly available at www.ucl.ac.uk/english-usage/statspapers/2x2-x2-separability.xls.)
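The two building blocks the abstract names — the Wilson score interval and a Gaussian comparison of proportions from independent tables — can be illustrated with a minimal Python sketch. This is not the paper's derivation of its meta-tests (which combines Newcombe-Wilson intervals into 'point', 'homogeneity' and 'goodness of fit' tests); the function names and example figures below are illustrative assumptions only.

```python
from math import sqrt

def wilson_interval(p, n, z=1.96):
    """Wilson score interval for a binomial proportion p observed over n trials."""
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    spread = (z / denom) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - spread, centre + spread

def two_proportion_z(p1, n1, p2, n2):
    """Gaussian z-score for the difference between two independent proportions,
    the kind of pairwise 'point' comparison the meta-tests build on."""
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return (p1 - p2) / se

# Illustrative data: rates of 0.6 and 0.4 observed in two subcorpora of 100 cases.
lo, hi = wilson_interval(0.5, 100)        # ≈ (0.404, 0.596)
z = two_proportion_z(0.6, 100, 0.4, 100)  # ≈ 2.89, exceeding 1.96 at α = 0.05
```

The key point of the paper survives even in this sketch: whether two tables differ significantly is itself a statistical question, answered by an interval or test on the *difference*, not by comparing the two tables' p values.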
Source journal: Journal of Quantitative Linguistics
CiteScore: 2.90 · Self-citation rate: 7.10% · Articles published: 7
期刊介绍: The Journal of Quantitative Linguistics is an international forum for the publication and discussion of research on the quantitative characteristics of language and text in an exact mathematical form. This approach, which is of growing interest, opens up important and exciting theoretical perspectives, as well as solutions for a wide range of practical problems such as machine learning or statistical parsing, by introducing into linguistics the methods and models of advanced scientific disciplines such as the natural sciences, economics, and psychology.