What do assessment center ratings reflect? Consistency and heterogeneity in variance composition across multiple samples.

Pia V Ingold, Anna Luca Heimann, Bettina Waller, Simon M Breil, Paul R Sackett

Journal of Applied Psychology | Published: 2025-09-29 | DOI: 10.1037/apl0001318
Citation count: 0
Abstract
The question of what assessment centers measure has remained a controversial topic in research for decades, with a recent increase in studies that (a) use generalizability theory and (b) acknowledge the effects of aggregating postexercise dimension ratings into higher level assessment center scores. Building on these developments, we used Bayesian generalizability theory and random-effects meta-analyses to examine the variance explained by assessment center components such as assessees, exercises, dimensions, assessors, and their interactions, as well as the interrater reliability of AC ratings, in 19 different assessment center samples from various organizations (N = 4,963 assessees with 272,528 observations). This provides the first meta-analytic estimates of these effects, as well as insight into the extent to which findings from previous studies generalize to assessment center samples that differ in measurement design, industry, and purpose, and how heterogeneous these effects are across samples. Results were consistent with previous trends in the ranking of variance explained by key AC components (with assessee main effects and assessee-exercise effects being the largest variance components) and additionally emphasized the relevance of assessee-exercise-dimension effects. In addition, meta-analytic results suggested substantial heterogeneity in all reliable variance components (i.e., assessee main effect, assessee-exercise effect, assessee-dimension effect, and assessee-exercise-dimension effect) and in interrater reliability across assessment center samples. Aggregating AC ratings into higher level scores (i.e., overall AC scores, exercise-level scores, and dimension-level scores) reduced heterogeneity only slightly. Implications of the findings for a multifaceted view of assessment center functioning are discussed. (PsycInfo Database Record (c) 2025 APA, all rights reserved).
About the journal:
The Journal of Applied Psychology® focuses on publishing original investigations that contribute new knowledge and understanding to fields of applied psychology (excluding clinical and applied experimental or human factors, which are better suited for other APA journals). The journal primarily considers empirical and theoretical investigations that enhance understanding of cognitive, motivational, affective, and behavioral psychological phenomena in work and organizational settings. These phenomena can occur at individual, group, organizational, or cultural levels, and in various work settings such as business, education, training, health, service, government, or military institutions. The journal welcomes submissions from both public and private sector organizations, for-profit or nonprofit. It publishes several types of articles, including:
1. Rigorously conducted empirical investigations that expand conceptual understanding (original investigations or meta-analyses).
2. Theory development articles and integrative conceptual reviews that synthesize literature and generate new theories on psychological phenomena to stimulate novel research.
3. Rigorously conducted qualitative research on phenomena that are challenging to capture with quantitative methods or that require inductive theory building.