Title: How a strong measurement validity review can go astray: A look at Higgins et al. (2024) and recommendations for future measurement-focused reviews
Authors: Brett A. Murphy, Judith A. Hall
Journal: Clinical Psychology Review, Volume 114, Article 102506 (December 2024)
DOI: 10.1016/j.cpr.2024.102506
URL: https://www.sciencedirect.com/science/article/pii/S0272735824001272
Citation count: 0
Abstract
Critical reviews of a test's measurement validity are valuable scientific contributions, yet even strong reviews can be undermined by subtle problems in how evidence is compiled and presented to readers. First, if discussions of poor reporting practices by a test's users are interwoven with discussions about validity support for the test itself, readers can be inadvertently misled into impressions of the latter which are improperly conflated with the former. Second, test reviewers should give at least as much careful attention to a test's external validity as to its structural validity; test reviewers who prioritize factor analysis and internal consistency at the expense of discriminant and convergent validity can inadvertently mislead readers into perceptions of a test which are more negative or more positive than is warranted by the evidence overall. In this commentary, we aim to help test evaluators in crafting critical investigations of measurement validity. We use Higgins et al.'s (2024) review of the Reading the Mind in the Eyes Test (RMET; Baron-Cohen et al., 2001) as a basis for discussion. We argue that their otherwise impressive review went astray in the two ways described above. After considering both the psychometric evidence that Higgins et al. (2024) provided and the external validity evidence that they did not provide, we conclude that their recommendations that the RMET should be abandoned, and that most prior research findings based on it should be reassessed or disregarded, are unwarranted.
About the journal:
Clinical Psychology Review serves as a platform for substantial reviews addressing pertinent topics in clinical psychology. Encompassing a spectrum of issues, from psychopathology to behavior therapy, cognition to cognitive therapies, behavioral medicine to community mental health, assessment, and child development, the journal seeks cutting-edge papers that significantly contribute to advancing the science and/or practice of clinical psychology.
While maintaining a primary focus on topics directly related to clinical psychology, the journal occasionally features reviews on psychophysiology, learning therapy, experimental psychopathology, and social psychology, provided they demonstrate a clear connection to research or practice in clinical psychology. Integrative literature reviews and summaries of innovative ongoing clinical research programs find a place within its pages. However, reports on individual research studies and theoretical treatises or clinical guides lacking an empirical base are deemed inappropriate for publication.