Brett W. Gelino, Rebekah D. Schlitzer, Derek D. Reed, Justin C. Strickland
{"title":"A systematic review and meta-analysis of test–retest reliability and stability of delay and probability discounting","authors":"Brett W. Gelino, Rebekah D. Schlitzer, Derek D. Reed, Justin C. Strickland","doi":"10.1002/jeab.910","DOIUrl":null,"url":null,"abstract":"<p>In this meta-analysis, we describe a benchmark value of delay and probability discounting reliability and stability that might be used to (a) evaluate the meaningfulness of clinically achieved changes in discounting and (b) support the role of discounting as a valid and enduring measure of intertemporal choice. We examined test–retest reliability, stability effect sizes (<i>d</i><sub>z</sub>; Cohen, 1992), and relevant moderators across 30 publications comprising 39 independent samples and 262 measures of discounting, identified via a systematic review of PsychInfo, PubMed, and Google Scholar databases. We calculated omnibus effect-size estimates and evaluated the role of proposed moderators using a robust variance estimation meta-regression method. The meta-regression output reflected modest test–retest reliability, <i>r</i> = .670, <i>p</i> < .001, 95% CI [.618, .716]. Discounting was most reliable when measured in the context of temporal constraints, in adult respondents, when using money as a medium, and when reassessed within 1 month. Testing also suggested acceptable stability via nonsignificant and small changes in effect magnitude over time, <i>d</i><sub>z</sub> = 0.048, <i>p</i> = .31, 95% CI [−0.051, 0.146]. 
Clinicians and researchers seeking to measure discounting can consider the contexts when reliability is maximized for specific cases.</p>","PeriodicalId":17411,"journal":{"name":"Journal of the experimental analysis of behavior","volume":null,"pages":null},"PeriodicalIF":1.4000,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of the experimental analysis of behavior","FirstCategoryId":"102","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/jeab.910","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"BEHAVIORAL SCIENCES","Score":null,"Total":0}
Cited by: 0
Abstract
In this meta-analysis, we describe a benchmark value of delay and probability discounting reliability and stability that might be used to (a) evaluate the meaningfulness of clinically achieved changes in discounting and (b) support the role of discounting as a valid and enduring measure of intertemporal choice. We examined test–retest reliability, stability effect sizes (dz; Cohen, 1992), and relevant moderators across 30 publications comprising 39 independent samples and 262 measures of discounting, identified via a systematic review of the PsycINFO, PubMed, and Google Scholar databases. We calculated omnibus effect-size estimates and evaluated the role of proposed moderators using a robust variance estimation meta-regression method. The meta-regression output reflected modest test–retest reliability, r = .670, p < .001, 95% CI [.618, .716]. Discounting was most reliable when measured in the context of temporal constraints, in adult respondents, when using money as a medium, and when reassessed within 1 month. Testing also suggested acceptable stability via nonsignificant and small changes in effect magnitude over time, dz = 0.048, p = .31, 95% CI [−0.051, 0.146]. Clinicians and researchers seeking to measure discounting should consider the contexts in which reliability is maximized for specific cases.
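The two quantities reported above can be computed for a single sample as follows. This is a minimal sketch with hypothetical paired scores (e.g., log-transformed discounting rates at test and retest), not data from the study: test–retest reliability is the Pearson correlation between sessions, and the stability effect size dz (Cohen, 1992) is the mean paired difference divided by the standard deviation of the differences.

```python
import numpy as np

# Hypothetical paired discounting measures from two sessions (not study data).
test = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
retest = np.array([1.1, 2.3, 2.8, 4.2, 5.1])

# Test–retest reliability: Pearson r between the two sessions.
r = np.corrcoef(test, retest)[0, 1]

# Stability effect size d_z: mean of the paired differences
# divided by the sample standard deviation of the differences.
diff = retest - test
d_z = diff.mean() / diff.std(ddof=1)

print(round(r, 3), round(d_z, 3))
```

A dz near zero (as in the meta-analytic estimate of 0.048) indicates that scores did not systematically drift between sessions, even when reliability (r) is only moderate.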
Journal overview:
Journal of the Experimental Analysis of Behavior is primarily for the original publication of experiments relevant to the behavior of individual organisms.