A tutorial on using generative models to advance psychological science: Lessons from the reliability paradox.
Nathaniel Haines, Peter D. Kvam, Louis Irving, Colin Tucker Smith, Theodore P. Beauchaine, Mark A. Pitt, Woo-Young Ahn, Brandon M. Turner
Psychological Methods (Q1, Psychology, Multidisciplinary). Published 2025-04-14. DOI: 10.1037/met0000674 (https://doi.org/10.1037/met0000674)
Citations: 0
Abstract
Theories of individual differences are foundational to psychological and brain sciences, yet they are traditionally developed and tested using superficial summaries of data (e.g., mean response times) that are disconnected from our otherwise rich conceptual theories of behavior. To resolve this theory-description gap, we review the generative modeling approach, which involves formally specifying how behavior is generated within individuals, and in turn how generative mechanisms vary across individuals. Generative modeling shifts our focus away from estimating descriptive statistical "effects" toward estimating psychologically interpretable parameters, while simultaneously enhancing the reliability and validity of our measures. We demonstrate the utility of generative modeling in the context of the "reliability paradox," a phenomenon wherein replicable group effects (e.g., Stroop effect) fail to capture individual differences (e.g., low test-retest reliability). Simulations and empirical data from the Implicit Association Test and Stroop, Flanker, Posner, and delay discounting tasks show that generative models yield (a) more theoretically informative parameters, and (b) higher test-retest reliability estimates relative to traditional approaches, illustrating their potential for enhancing theory development. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
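The reliability paradox described above can be made concrete with a short simulation. The sketch below is a minimal illustration, not code from the paper: all parameter values (number of subjects, trials per condition, and noise magnitudes) are assumptions chosen for demonstration. It generates trial-level Stroop-style data for two sessions from a simple generative model, then shows that the group-level effect is large and replicable while the test-retest correlation of individual difference scores is low, and reports the reliability implied by the generative model itself.

```python
# Minimal sketch of the "reliability paradox": a robust group effect with poor
# test-retest reliability of individual difference scores.
# All parameter values below are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

n_subjects = 60        # assumed sample size
n_trials = 48          # assumed trials per condition per session
mu_effect = 60.0       # assumed population-mean Stroop effect (ms)
sigma_between = 15.0   # assumed between-subject SD of the true effect (ms)
sigma_trial = 150.0    # assumed trial-level response-time noise (ms)

# Each subject has a stable "true" Stroop effect shared across both sessions.
true_effect = rng.normal(mu_effect, sigma_between, n_subjects)

def observed_difference_scores(true_effect):
    """Per-subject mean(incongruent) - mean(congruent) computed from noisy trials."""
    congruent = rng.normal(0.0, sigma_trial, (n_subjects, n_trials))
    incongruent = rng.normal(true_effect[:, None], sigma_trial, (n_subjects, n_trials))
    return incongruent.mean(axis=1) - congruent.mean(axis=1)

session1 = observed_difference_scores(true_effect)
session2 = observed_difference_scores(true_effect)

# Replicable group effect: the mean difference score is clearly positive.
print("Mean Stroop effect, session 1 (ms):", round(session1.mean(), 1))

# Yet test-retest reliability of the individual difference scores is low,
# because trial-level noise swamps the small between-subject variance.
print("Test-retest r of difference scores:", round(np.corrcoef(session1, session2)[0, 1], 2))

# Reliability implied by the generative model: the share of variance in an
# observed difference score that reflects the subject's true effect.
error_var = 2 * sigma_trial**2 / n_trials
print("Model-implied reliability:", round(sigma_between**2 / (sigma_between**2 + error_var), 2))
```

With these assumed values the group effect is many standard errors above zero, while the model-implied reliability of the difference scores is only about .2, which is the pattern the abstract summarizes. Hierarchical generative models address this by estimating the trial-level noise and between-subject variance jointly rather than treating each subject's noisy summary score as an error-free measurement.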
Journal description:
Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues. The audience is expected to be diverse and to include those who develop new procedures, those who are responsible for undergraduate and graduate training in design, measurement, and statistics, as well as those who employ those procedures in research.