In‐Basket Validity: A Systematic Review
Deborah L. Whetzel, Paul F. Rotenberry, Michael A. McDaniel
International Journal of Selection & Assessment (Wiley-Blackwell), March 2014. DOI: 10.1111/ijsa.12057 (https://doi.org/10.1111/ijsa.12057)
Citations: 13
Abstract
In‐baskets are high‐fidelity simulations often used to predict performance in a variety of jobs, including law enforcement, clerical, and managerial occupations. They measure constructs not typically assessed by other simulations (e.g., administrative and managerial skills, and procedural and declarative job knowledge). We compiled the largest known database (k = 31; N = 3,958) to address the criterion‐related validity of in‐baskets and possible moderators. Moderators included features of the in‐basket (content: generic vs. job‐specific; scoring approach: objective vs. subjective) and features of the validity studies (design: concurrent vs. predictive; source: published vs. unpublished). Sensitivity analyses assessed how robust the results were to the influence of various biases. Results showed that the operational criterion‐related validity of in‐baskets was sufficiently high to justify their use in high‐stakes settings. Moderator analyses provided useful guidance for developers and users regarding content and scoring.
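To give a sense of what aggregating criterion‐related validities across studies involves, the sketch below shows a bare‐bones psychometric meta‐analysis in the Hunter–Schmidt tradition: a sample‐size‐weighted mean correlation and the residual between‐study variance after removing expected sampling error. This is only an illustration of the general technique, not the authors' actual procedure or data; the (n, r) pairs are hypothetical, whereas the paper's real database comprises k = 31 studies and N = 3,958.

```python
from typing import List, Tuple


def bare_bones_meta(studies: List[Tuple[int, float]]) -> Tuple[float, float]:
    """Sample-size-weighted mean correlation and residual (between-study)
    variance after subtracting expected sampling-error variance."""
    total_n = sum(n for n, _ in studies)
    # Weighted mean validity across studies
    mean_r = sum(n * r for n, r in studies) / total_n
    # Observed variance of correlations, weighted by sample size
    var_obs = sum(n * (r - mean_r) ** 2 for n, r in studies) / total_n
    # Expected sampling-error variance for correlations (average-n form)
    avg_n = total_n / len(studies)
    var_err = (1 - mean_r ** 2) ** 2 / (avg_n - 1)
    return mean_r, max(var_obs - var_err, 0.0)


# Hypothetical (sample size, observed validity) pairs, purely for demonstration
example_studies = [(120, 0.28), (250, 0.35), (90, 0.22), (400, 0.31)]
mean_r, residual_var = bare_bones_meta(example_studies)
print(f"weighted mean r = {mean_r:.3f}, residual variance = {residual_var:.4f}")
```

Moderator analyses of the kind the abstract describes would then compare such weighted means across subsets of studies (e.g., generic vs. job‐specific content, objective vs. subjective scoring); corrections for range restriction and criterion unreliability, which an operational validity estimate implies, are omitted here for brevity.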