{"title":"Interpreting validity evidence: It is time to end the horse race","authors":"Kevin Murphy","doi":"10.1017/iop.2023.27","DOIUrl":"https://doi.org/10.1017/iop.2023.27","url":null,"abstract":"For almost 25 years, two conclusions arising from a series of meta-analyses (summarized by Schmidt & Hunter, 1998) have been widely accepted in the field of I–O psychology: (a) that cognitive ability tests showed substantial validity as predictors of job performance, with scores on these tests accounting for over 25% of the variance in performance, and (b) cognitive ability tests were among the best predictors of performance and, taking into account their simplicity and broad applicability, were likely to be the starting point for most selection systems. Sackett, Zhang, Berry, and Lievens (2022) challenged these conclusions, showing how unrealistic corrections for range restriction in meta-analyses had led to substantial overestimates of the validity of most tests and assessments and suggesting that cognitive tests were not among the best predictors of performance. Sackett, Zhang, Berry, and Lievens (2023) illustrate many important implications of their analysis for the evaluation of selection tests and for developing selection test batteries. Discussions of the validity of alternative predictors of performance often take on the character of a horse race, in which a great deal of attention is given to determining which is the best predictor. From this perspective, one of the messages of Sackett et al. (2022) might be that cognitive ability has been dethroned as the best predictor, and that structured interviews, job knowledge tests, empirically keyed biodata forms, and work sample tests are all better choices. In my view, dethroning cognitive ability tests as the best predictor is one of the least important conclusions of the Sackett et al. (2022) review. 
Although horse races might be fun, the quest to find the best single predictor of performance is arguably pointless because personnel selection is inherently a multivariate problem, not a univariate one. First, personnel selection is virtually never done based on scores on a single test or assessment. There are certainly scenarios where a low score on a single assessment might lead to a negative selection decision; an applicant for a highly selective college who submits a combined SAT score of 560 (320 in Math and 240 in Evidence-Based Reading and Writing) will almost certainly be rejected. However, real-world selection decisions that are based on any type of systematic assessments will usually be based on multiple assessments (e.g., interviews plus tests, biodata plus interviews). More to the point, the criteria that are used to evaluate the validity and value of selection tests are almost certainly multivariate. That is, although selection tests are often validated against supervisory ratings of job performance, they are not designed or used to predict these ratings, which often show uncertain relationships with actual effectiveness in the workplace (Adler et al., 2016; Murphy et al., 2018). Rather, they are used to help organizations make decisions, and assessing the quality of these decisions o","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46896890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Revisiting predictor–criterion construct congruence: Implications for designing personnel selection systems","authors":"L. Hough, F. Oswald","doi":"10.1017/iop.2023.35","DOIUrl":"https://doi.org/10.1017/iop.2023.35","url":null,"abstract":"Overview In their focal article, Sackett et al. (in press) describe implications of their new meta-analytic estimates of the validity of widely used predictors for the selection of employees. Contradicting the received wisdom of Schmidt and Hunter (1998), Sackett et al. conclude that predictor methods with content specifically tailored to jobs generally have greater validity for predicting job performance than general measures reflecting psychological constructs (e.g., cognitive abilities, personality traits). They also point out that standard deviations around the mean of their meta-analytic validity estimates are often large, leading to their question “why the variability?” (p. x). They suggest many legitimate contributors. We propose an additional moderator variable of critical importance: predictor–criterion construct congruence, which accounts for a great deal of the variability in validity coefficients found in meta-analysis. That is, the extent to which what is measured is congruent with what is predicted is an important determinant of the level of validity obtained. Sackett et al. (2022) acknowledge that the strongest predictors in their re-analysis are job-specific measures and that a “closer behavioral match between predictor and criterion” (p. 2062) might contribute to higher validities. 
Many in our field have also noted the importance of “behavioral consistency” between predictors and criteria relevant to selection, while also arguing for another type of congruence: the relationships between constructs in both the predictor and criterion space (e.g., Bartram, 2005; Campbell et al., 1993; Campbell & Knapp, 2001; Hogan & Holland, 2003; Hough, 1992; Hough & Oswald, 2005; Pulakos et al., 1988; Sackett & Lievens, 2008; Schmitt & Ostroff, 1986). The above reflects an important distinction between two types of congruence: behavior-based congruence and construct-based congruence. When ‘past behavior predicts future behavior’ (as might be possible for jobs requiring past experience and where behavior-oriented employment assessments such as interviews, biodata, and work samples are involved), behavior-based congruence exists. Behavior-based assessments can vary a great deal across jobs but tend to ask about past experiences that are influenced by a complex mix of KSAOs. By contrast, construct-based congruence aligns employment tests of job-relevant KSAOs (e.g., verbal and math skills, conscientiousness) with relevant work criteria, such as technical performance or counterproductive work behavior (e.g., Campbell & Wiernik, 2015). 
What we are strongly suggesting here is that, regardless of the approach to congruence adopted in selection, it is the congruence between predictor and criterion constructs that is a key factor","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43876729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"To correct or not to correct for range restriction, that is the question: Looking back and ahead to move forward","authors":"In-Sue Oh, Jorge Mendoza, H. Le","doi":"10.1017/iop.2023.38","DOIUrl":"https://doi.org/10.1017/iop.2023.38","url":null,"abstract":"Sackett et al. (2023) start their focal article by stating that they identified “previously unnoticed flaws” in range restriction (RR) corrections in most validity generalization (VG) meta-analyses of selection procedures reviewed in their 2022 article. Following this provocative opening statement, they discuss how both researchers and practitioners have handled (and should handle) RR corrections in estimating the operational validity of a selection procedure in both VG meta-analyses (whose input studies are predominantly concurrent studies) and individual validation studies (which serve as input to VG meta-analyses). The purpose of this commentary is twofold. We first provide an essential review of Sackett et al.’s (2022) three propositions serving as the major rationales for their recommendations regarding RR corrections (e.g., no corrections for RR in concurrent validation studies). We then provide our critical analyses of their rationales and recommendations regarding RR corrections to put them in perspective, along with some additional thoughts.","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47349958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is it also time to revisit situational specificity?","authors":"J. DeSimone, T. Fezzey","doi":"10.1017/iop.2023.40","DOIUrl":"https://doi.org/10.1017/iop.2023.40","url":null,"abstract":"Sackett et al.’s (2023) focal article asserts that the predictors with the highest criterion-related validity in selection settings are specific to individual jobs and emphasizes the importance of adjusting for range restriction (and attenuation) using study-specific artifact estimates. These positions, along with other recent perspectives on meta-analysis, lead us to reassess the extent to which situational specificity (SS) is worth consideration in organizational selection contexts. In this commentary, we will (a) examine the historical context of both the SS and validity generalization (VG) perspectives, (b) evaluate evidence pertaining to these perspectives, and (c) consider whether it is possible for both perspectives to coexist.","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44291526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A response to speculations about concurrent validities in selection: Implications for cognitive ability","authors":"D. Ones, C. Viswesvaran","doi":"10.1017/iop.2023.43","DOIUrl":"https://doi.org/10.1017/iop.2023.43","url":null,"abstract":"Although we have many important areas of agreement with Sackett and colleagues1, we must address two issues that form the backbone of the focal article. First, we explain why range restriction corrections in concurrent validation are appropriate, describing the conceptual basis for range restriction corrections, and highlighting some pertinent technical issues that should elicit skepticism about the focal article’s assertions. Second, we disagree with the assertion that the operational validity of cognitive ability is much lower than previously reported. We conclude with some implications for applied practice.","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47757621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the undervaluing of diversity in the validity–diversity tradeoff consideration","authors":"J. Olenick, Ajay V. Somaraju","doi":"10.1017/iop.2023.29","DOIUrl":"https://doi.org/10.1017/iop.2023.29","url":null,"abstract":"Sackett et al. (2023) provide a useful, more practice-oriented discussion of the Sackett et al. (2022) report, which reexamined meta-analytic corrections for a wide variety of selection tools across common content and process domains. We expand on their discussion of the implications of the new validity estimates for the classic validity–diversity tradeoff by arguing that the benefits of diversity are still underestimated when assessing this tradeoff. To be fair, this issue is not limited to Sackett et al.’s efforts but rather represents a shortcoming of the field at large. Regardless, these limitations mean that if diversity benefits were better understood by the field and properly accounted for in tradeoff estimates, even greater reductions in the usefulness of predictors with high group mean differences would likely be observed. We make three key points. First, we argue that the benefits of group diversity are not included in selection decisions, leading to underestimations of diversity benefits. Second, we elaborate on the central role of interdependence as a condition that maximizes the importance of diversity. 
Finally, we connect these issues to the long-term implications of assessment decisions containing adverse impact.","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45157067","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rumors of general mental ability’s demise are the next red herring","authors":"Jeffrey M. Cucina, Theodore L. Hayes","doi":"10.1017/iop.2023.37","DOIUrl":"https://doi.org/10.1017/iop.2023.37","url":null,"abstract":"In this paper, we focus on the lowered validity for general mental ability (GMA) tests by presenting: (a) a history of the range restriction correction controversy; (b) a review of validity evidence using various criteria; and (c) multiple paradoxes that arise with a lower GMA validity.","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42756865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Structured interviews: moving beyond mean validity…","authors":"Allen I. Huffcutt, S. Murphy","doi":"10.1017/iop.2023.42","DOIUrl":"https://doi.org/10.1017/iop.2023.42","url":null,"abstract":"As interview researchers, we were of course delighted by the focal authors’ finding that structured interviews emerged as the predictor with the highest mean validity in their meta-analysis (Sackett et al., 2023, Table 1). Moreover, they found that structured interviews not only provide strong validity but do so while having significantly lower impact on racial groups than other top predictors such as biodata, knowledge, work samples, assessment centers, and GMA (see their Figure 1). Unfortunately, it also appears that structured interviews have the highest variability in validity (i.e., .42 ± .24) among top predictors (Sackett et al., 2023, Table 1). Such a level of inconsistency is concerning and warrants closer examination. Given that the vast majority of interview research (including our own) has focused on understanding and improving mean validity as opposed to reducing variability, we advocate for a fundamental shift in focus. Specifically, we call for more research on identifying factors that can induce variability in validity and, subsequently, on finding ways to minimize their influence. Our commentary will highlight several prominent factors that have the potential to contribute significantly to the inconsistency in validity. 
We group them according to three major components of the interview process: interview format/methodology, applicant cognitive processes, and contextual factors.","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44269716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polyculturalism as a multilevel phenomenon","authors":"Suzette Caleo, Daniel S. Whitman","doi":"10.1017/iop.2023.41","DOIUrl":"https://doi.org/10.1017/iop.2023.41","url":null,"abstract":"","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48251148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Going beyond a validity focus to accommodate megatrends in selection system design","authors":"John W. Jones, M. Cunningham","doi":"10.1017/iop.2023.28","DOIUrl":"https://doi.org/10.1017/iop.2023.28","url":null,"abstract":"Sackett, Zhang, Berry, and Lievens (2023) are to be commended for correcting the validity estimates of widely used predictors, many of which turned out to have less validity than prior studies led us to believe. Yet, we should recognize that psychologists and their clients were misled for many years about the utility of some mainstream assessments, and selection system design surely suffered. Although Sackett et al. (2023) offered useful recommendations for researchers, they never really addressed selection system design from a practitioner perspective. This response aims to address that omission, emphasizing a multidimensional approach to design science (Casillas et al., 2019).","PeriodicalId":47771,"journal":{"name":"Industrial and Organizational Psychology-Perspectives on Science and Practice","volume":null,"pages":null},"PeriodicalIF":15.8,"publicationDate":"2023-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43443722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}