{"title":"An investigation of the convergent validity and test-retest reliability of three uncertainty preference measures.","authors":"Guangyu Zhu, Yiyun Shou, Michael Smithson","doi":"10.3758/s13428-025-02729-9","DOIUrl":null,"url":null,"abstract":"<p><p>Establishing a reliable and valid measure is crucial for ensuring the accuracy and replicability of research findings on risk and uncertainty preference. However, few studies have assessed the reliability and validity of behavioral measures of uncertainty preference. This study examined the convergent validity and test-retest reliability of three commonly used uncertainty preference measures: forced binary choice, certainty equivalent, and matching probability tasks. Experiments 1 (N = 302) and 2 (N = 366) tested the convergent validity and test-retest reliability of one-off assessment of these measures and found that the three measures did not demonstrate satisfactory convergent validity and test-retest reliability for the one-off assessment. Experiment 3 (N = 311) increased the number of repeats to explore whether repeated measurements could enhance convergent validity and test-retest reliability by addressing the attenuation effect of lack of reliability. The convergent validity between certainty equivalent and matching probability improved in the repeated measurement condition. However, the test-retest reliability of the three measures was still not satisfactory in repeated measurement conditions. These findings highlight the measurement issues in the behavioral measures of uncertainty preferences. The potential causes of this low validity and reliability of behavioral measures of uncertainty preference are discussed.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 8","pages":"221"},"PeriodicalIF":4.6000,"publicationDate":"2025-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12241273/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavior Research Methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13428-025-02729-9","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0
Abstract
Establishing reliable and valid measures is crucial for ensuring the accuracy and replicability of research findings on risk and uncertainty preference. However, few studies have assessed the reliability and validity of behavioral measures of uncertainty preference. This study examined the convergent validity and test-retest reliability of three commonly used uncertainty preference measures: forced binary choice, certainty equivalent, and matching probability tasks. Experiments 1 (N = 302) and 2 (N = 366) tested the convergent validity and test-retest reliability of one-off assessments of these measures and found that none of the three measures demonstrated satisfactory convergent validity or test-retest reliability under one-off assessment. Experiment 3 (N = 311) increased the number of repeated measurements to explore whether repetition could enhance convergent validity and test-retest reliability by counteracting the attenuation caused by low reliability. The convergent validity between the certainty equivalent and matching probability tasks improved under the repeated-measurement condition. However, the test-retest reliability of the three measures remained unsatisfactory even with repeated measurement. These findings highlight measurement issues in behavioral measures of uncertainty preference. Potential causes of the low validity and reliability of these behavioral measures are discussed.
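For context on the attenuation argument above: under classical test theory, the correlation observed between two measures is attenuated by their unreliability, and averaging repeated measurements raises reliability, which in turn raises the attainable observed (convergent) correlation. A minimal sketch follows, using the standard correction-for-attenuation relation and the Spearman-Brown prophecy formula; the abstract does not state which formulas the authors applied, so this is an illustrative assumption rather than the paper's method.

\[
  % Observed correlation r_{xy} is the true correlation \rho_{xy}
  % attenuated by the reliabilities r_{xx} and r_{yy} of the two measures.
  r_{xy} \;=\; \rho_{xy}\,\sqrt{r_{xx}\, r_{yy}},
  \qquad
  % Reliability of the mean of k parallel repeats (Spearman-Brown):
  r_{xx}^{(k)} \;=\; \frac{k\, r_{xx}}{1 + (k-1)\, r_{xx}}.
\]

For example, if a single administration of each task has reliability 0.5, two tasks whose true preference correlation is 0.8 would show an observed correlation of only about 0.4; averaging four repeats raises each reliability to roughly 0.8 and the expected observed correlation to about 0.64.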
About the journal:
Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.