Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic "shadow" biases in participant samples.

Impact Factor: 3.4 · CAS Region 2 (Psychology) · JCR Q1 (Psychology, Experimental)
Emma M Siritzky, Patrick H Cox, Sydni M Nadler, Justin N Grady, Dwight J Kravitz, Stephen R Mitroff
{"title":"Standard experimental paradigm designs and data exclusion practices in cognitive psychology can inadvertently introduce systematic \"shadow\" biases in participant samples.","authors":"Emma M Siritzky, Patrick H Cox, Sydni M Nadler, Justin N Grady, Dwight J Kravitz, Stephen R Mitroff","doi":"10.1186/s41235-023-00520-y","DOIUrl":null,"url":null,"abstract":"<p><p>Standard cognitive psychology research practices can introduce inadvertent sampling biases that reduce the reliability and generalizability of the findings. Researchers commonly acknowledge and understand that any given study sample is not perfectly generalizable, especially when implementing typical experimental constraints (e.g., limiting recruitment to specific age ranges or to individuals with normal color vision). However, less obvious systematic sampling constraints, referred to here as \"shadow\" biases, can be unintentionally introduced and can easily go unnoticed. For example, many standard cognitive psychology study designs involve lengthy and tedious experiments with simple, repetitive stimuli. Such testing environments may 1) be aversive to some would-be participants (e.g., those high in certain neurodivergent symptoms) who may self-select not to enroll in such studies, or 2) contribute to participant attrition, both of which reduce the sample's representativeness. Likewise, standard performance-based data exclusion efforts (e.g., minimum accuracy or response time) or attention checks can systematically remove data from participants from subsets of the population (e.g., those low in conscientiousness). This commentary focuses on the theoretical and practical issues behind these non-obvious and often unacknowledged \"shadow\" biases, offers a simple illustration with real data as a proof of concept of how applying attention checks can systematically skew latent/hidden variables in the included population, and then discusses the broader implications with suggestions for how to manage and reduce, or at a minimum acknowledge, the problem.</p>","PeriodicalId":46827,"journal":{"name":"Cognitive Research-Principles and Implications","volume":"8 1","pages":"66"},"PeriodicalIF":3.4000,"publicationDate":"2023-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10590344/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cognitive Research-Principles and Implications","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1186/s41235-023-00520-y","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract

Standard cognitive psychology research practices can introduce inadvertent sampling biases that reduce the reliability and generalizability of the findings. Researchers commonly acknowledge and understand that any given study sample is not perfectly generalizable, especially when implementing typical experimental constraints (e.g., limiting recruitment to specific age ranges or to individuals with normal color vision). However, less obvious systematic sampling constraints, referred to here as "shadow" biases, can be unintentionally introduced and can easily go unnoticed. For example, many standard cognitive psychology study designs involve lengthy and tedious experiments with simple, repetitive stimuli. Such testing environments may 1) be aversive to some would-be participants (e.g., those high in certain neurodivergent symptoms) who may self-select not to enroll in such studies, or 2) contribute to participant attrition, both of which reduce the sample's representativeness. Likewise, standard performance-based data exclusion efforts (e.g., minimum accuracy or response time) or attention checks can systematically remove data from participants from subsets of the population (e.g., those low in conscientiousness). This commentary focuses on the theoretical and practical issues behind these non-obvious and often unacknowledged "shadow" biases, offers a simple illustration with real data as a proof of concept of how applying attention checks can systematically skew latent/hidden variables in the included population, and then discusses the broader implications with suggestions for how to manage and reduce, or at a minimum acknowledge, the problem.
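The abstract's core claim is that a seemingly trait-neutral exclusion rule (such as an attention check) can shift the distribution of a latent variable in the retained sample. The following minimal Python sketch, which is not the paper's proof-of-concept analysis, illustrates that mechanism under an assumed logistic link between a standardized latent trait (e.g., conscientiousness) and the probability of passing the check; all parameter values are illustrative.

```python
# Minimal simulation sketch (illustrative assumptions, not the paper's analysis):
# if passing an attention check depends on a latent trait, excluding failures
# shifts the trait distribution in the retained sample.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Latent trait (e.g., conscientiousness), standardized.
trait = rng.normal(0.0, 1.0, n)

# Assumed link: higher trait -> higher probability of passing the attention check.
pass_prob = 1.0 / (1.0 + np.exp(-(1.5 + 1.0 * trait)))
passed = rng.random(n) < pass_prob

print(f"Pass rate:               {passed.mean():.2%}")
print(f"Trait mean, full sample: {trait.mean():+.3f}")
print(f"Trait mean, retained:    {trait[passed].mean():+.3f}")
print(f"Trait mean, excluded:    {trait[~passed].mean():+.3f}")
```

Under these assumptions the retained sample's trait mean sits above zero while the excluded participants' mean sits well below it, even though the exclusion rule never references the trait directly.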


Source journal: Cognitive Research: Principles and Implications
CiteScore: 6.80
Self-citation rate: 7.30%
Articles per year: 96
Average review time: 25 weeks