Enough?

Observational Studies · Pub Date: 2025-04-11 · eCollection Date: 2025-01-01 · DOI: 10.1353/obs.2025.a956838
Drew Dimmery, Kevin Munger
{"title":"Enough?","authors":"Drew Dimmery, Kevin Munger","doi":"10.1353/obs.2025.a956838","DOIUrl":null,"url":null,"abstract":"<p><p>We provide a critical response to Aronow et al. (2021) which argued that randomized controlled trials (RCTs) are \"enough,\" while nonparametric identification in observational studies is not. We first investigate what is meant by \"enough,\" arguing that this is a fundamentally a sociological claim about the relationship between statistical work and relevant institutional processes (here, academic peer review), rather than something that can be decided from within the logic of statistics. For a more complete conception of \"enough,\" we outline all that would need to be known - not just knowledge of propensity scores, but knowledge of many other spatial and temporal characteristics of the social world. Even granting the logic of the critique in Aronow et al. (2021), its practical importance is a question of the contexts under study. We argue that we should not be satisfied by appeals to intuition or experience about the complexity of \"naturally occurring\" propensity score functions. Instead, we call for more empirical metascience to begin to characterize this complexity. We apply this logic to the case of recommender systems as a demonstration of the weakness of allowing statisticians' intuitions to serve in place of metascientific data. This may be, as Aronow et al. (2021) claim, one of the \"few free lunches in statistics\"-but like many of the free lunches consumed by statisticians, it is only available to those working at a handful of large tech firms. Rather than implicitly deciding what is \"enough\" based on statistical applications the social world has determined to be most profitable, we are argue that practicing statisticians should explicitly engage with questions like \"for what?\" and \"for whom?\" in order to adequately answer the question of \"enough?\"</p>","PeriodicalId":74335,"journal":{"name":"Observational studies","volume":"11 1","pages":"17-26"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12139716/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Observational studies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1353/obs.2025.a956838","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

We provide a critical response to Aronow et al. (2021), which argued that randomized controlled trials (RCTs) are "enough," while nonparametric identification in observational studies is not. We first investigate what is meant by "enough," arguing that this is fundamentally a sociological claim about the relationship between statistical work and relevant institutional processes (here, academic peer review), rather than something that can be decided from within the logic of statistics. For a more complete conception of "enough," we outline all that would need to be known: not just knowledge of propensity scores, but knowledge of many other spatial and temporal characteristics of the social world. Even granting the logic of the critique in Aronow et al. (2021), its practical importance is a question of the contexts under study. We argue that we should not be satisfied by appeals to intuition or experience about the complexity of "naturally occurring" propensity score functions. Instead, we call for more empirical metascience to begin to characterize this complexity. We apply this logic to the case of recommender systems as a demonstration of the weakness of allowing statisticians' intuitions to serve in place of metascientific data. This may be, as Aronow et al. (2021) claim, one of the "few free lunches in statistics," but like many of the free lunches consumed by statisticians, it is only available to those working at a handful of large tech firms. Rather than implicitly deciding what is "enough" based on the statistical applications the social world has determined to be most profitable, we argue that practicing statisticians should explicitly engage with questions like "for what?" and "for whom?" in order to adequately answer the question of "enough?"
