Nathaniel Haines, Peter D Kvam, Louis Irving, Colin Tucker Smith, Theodore P Beauchaine, Mark A Pitt, Woo-Young Ahn, Brandon M Turner
{"title":"使用生成模型推进心理科学的教程:可靠性悖论的启示。","authors":"Nathaniel Haines,Peter D Kvam,Louis Irving,Colin Tucker Smith,Theodore P Beauchaine,Mark A Pitt,Woo-Young Ahn,Brandon M Turner","doi":"10.1037/met0000674","DOIUrl":null,"url":null,"abstract":"Theories of individual differences are foundational to psychological and brain sciences, yet they are traditionally developed and tested using superficial summaries of data (e.g., mean response times) that are disconnected from our otherwise rich conceptual theories of behavior. To resolve this theory-description gap, we review the generative modeling approach, which involves formally specifying how behavior is generated within individuals, and in turn how generative mechanisms vary across individuals. Generative modeling shifts our focus away from estimating descriptive statistical \"effects\" toward estimating psychologically interpretable parameters, while simultaneously enhancing the reliability and validity of our measures. We demonstrate the utility of generative modeling in the context of the \"reliability paradox,\" a phenomenon wherein replicable group effects (e.g., Stroop effect) fail to capture individual differences (e.g., low test-retest reliability). Simulations and empirical data from the Implicit Association Test and Stroop, Flanker, Posner, and delay discounting tasks show that generative models yield (a) more theoretically informative parameters, and (b) higher test-retest reliability estimates relative to traditional approaches, illustrating their potential for enhancing theory development. (PsycInfo Database Record (c) 2025 APA, all rights reserved).","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"108 1","pages":""},"PeriodicalIF":7.6000,"publicationDate":"2025-04-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A tutorial on using generative models to advance psychological science: Lessons from the reliability paradox.\",\"authors\":\"Nathaniel Haines,Peter D Kvam,Louis Irving,Colin Tucker Smith,Theodore P Beauchaine,Mark A Pitt,Woo-Young Ahn,Brandon M Turner\",\"doi\":\"10.1037/met0000674\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Theories of individual differences are foundational to psychological and brain sciences, yet they are traditionally developed and tested using superficial summaries of data (e.g., mean response times) that are disconnected from our otherwise rich conceptual theories of behavior. To resolve this theory-description gap, we review the generative modeling approach, which involves formally specifying how behavior is generated within individuals, and in turn how generative mechanisms vary across individuals. Generative modeling shifts our focus away from estimating descriptive statistical \\\"effects\\\" toward estimating psychologically interpretable parameters, while simultaneously enhancing the reliability and validity of our measures. We demonstrate the utility of generative modeling in the context of the \\\"reliability paradox,\\\" a phenomenon wherein replicable group effects (e.g., Stroop effect) fail to capture individual differences (e.g., low test-retest reliability). 
Simulations and empirical data from the Implicit Association Test and Stroop, Flanker, Posner, and delay discounting tasks show that generative models yield (a) more theoretically informative parameters, and (b) higher test-retest reliability estimates relative to traditional approaches, illustrating their potential for enhancing theory development. (PsycInfo Database Record (c) 2025 APA, all rights reserved).\",\"PeriodicalId\":20782,\"journal\":{\"name\":\"Psychological methods\",\"volume\":\"108 1\",\"pages\":\"\"},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2025-04-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychological methods\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/met0000674\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/met0000674","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
引用次数: 0
A tutorial on using generative models to advance psychological science: Lessons from the reliability paradox.
Theories of individual differences are foundational to psychological and brain sciences, yet they are traditionally developed and tested using superficial summaries of data (e.g., mean response times) that are disconnected from our otherwise rich conceptual theories of behavior. To resolve this theory-description gap, we review the generative modeling approach, which involves formally specifying how behavior is generated within individuals, and in turn how generative mechanisms vary across individuals. Generative modeling shifts our focus away from estimating descriptive statistical "effects" toward estimating psychologically interpretable parameters, while simultaneously enhancing the reliability and validity of our measures. We demonstrate the utility of generative modeling in the context of the "reliability paradox," a phenomenon wherein replicable group effects (e.g., Stroop effect) fail to capture individual differences (e.g., low test-retest reliability). Simulations and empirical data from the Implicit Association Test and Stroop, Flanker, Posner, and delay discounting tasks show that generative models yield (a) more theoretically informative parameters, and (b) higher test-retest reliability estimates relative to traditional approaches, illustrating their potential for enhancing theory development. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
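To make the reliability paradox concrete, here is a minimal Python sketch (not taken from the article; the subject count, trial count, effect size, and noise level are illustrative assumptions). It simulates two sessions of a Stroop-like task and shows how a group effect can replicate strongly while per-subject summary scores (differences of condition means) show poor test-retest reliability, which is the gap a hierarchical generative model is meant to close by separating trial-level noise from between-subject variation.

import numpy as np

rng = np.random.default_rng(0)

n_subjects = 200     # hypothetical sample size
n_trials = 40        # trials per condition per session
sigma_trial = 0.25   # assumed trial-to-trial RT noise (seconds)
mu_effect = 0.06     # assumed mean true Stroop effect (seconds)
sd_effect = 0.02     # assumed between-subject SD of the true effect

# Each subject has a stable "true" Stroop effect shared by both sessions.
true_effect = rng.normal(mu_effect, sd_effect, n_subjects)

def summary_scores():
    """Per-subject difference of condition means from noisy trial-level RTs."""
    incongruent = rng.normal(0.70 + true_effect[:, None], sigma_trial,
                             (n_subjects, n_trials))
    congruent = rng.normal(0.70, sigma_trial, (n_subjects, n_trials))
    return incongruent.mean(axis=1) - congruent.mean(axis=1)

score_t1 = summary_scores()   # session 1 summary scores
score_t2 = summary_scores()   # session 2 summary scores

# The group-level effect replicates easily: the mean difference is reliably positive.
print("mean observed effect, session 1:", round(score_t1.mean(), 3))

# Yet test-retest reliability of the individual-difference scores is poor,
# because trial noise swamps the small between-subject variance.
print("test-retest r of summary scores:",
      round(np.corrcoef(score_t1, score_t2)[0, 1], 2))

# The true effects are perfectly stable by construction; a generative model that
# separates trial noise from between-subject variation targets this stable
# quantity instead of the noisy summary score.
print("r between true effects and session-1 scores:",
      round(np.corrcoef(true_effect, score_t1)[0, 1], 2))

Under these assumed values, the expected reliability of the summary score is roughly the ratio of between-subject variance to total score variance (about 0.1 here), even though every subject's underlying effect is identical across sessions; a hierarchical generative model estimates that ratio explicitly rather than letting trial noise attenuate the observed correlation.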
Journal introduction:
Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues. The audience is expected to be diverse and to include those who develop new procedures, those who are responsible for undergraduate and graduate training in design, measurement, and statistics, as well as those who employ those procedures in research.