Bias in Observed Validity Estimates When Using Multiple Valid Predictors
Norman D. Henderson
Human Performance, 34(1), 385–411. Published 2021-08-30. DOI: 10.1080/08959285.2021.1968866
JCR: Q2 (Psychology, Applied); Impact Factor 2.9
ABSTRACT Simulated data, validity reports, and a firefighter predictive validation study are used to examine validity bias created by three common selection problems: range restriction, applicant and incumbent attrition, and nonlinearity created by compression of high selection test scores. Top 20% selection samples drawn from an applicant pool with known validity coefficients demonstrate that the sample validity estimates of the three predictors are differentially biased in both magnitude and direction, depending on the selection strategy used. Concurrent validity designs generally favor novel predictors. Corrections for direct range restriction across situations were mostly ineffectual. With proper scaling, corrections for indirect range restriction are accurate, but cross-variable biasing effects can occur when score distributions of the individual predictors differ. Many of the biases found in the simulation results are demonstrated in a firefighter predictive validation study in which variants of Pearson-Thorndike range-corrected validities and a full information maximum likelihood (FIML) approach are compared as validity assessments. With normalized predictors, both the Pearson and FIML methods show that a test of general mental ability and physically demanding job tasks predicted firefighter performance throughout the 30-year study, with no evidence of interactions or a leveling of performance at high test scores.
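The mechanism the abstract describes can be illustrated with a small simulation. The sketch below (not the paper's code; population validity, sample size, and the 20% cutoff are assumed for illustration) shows how selecting the top 20% of applicants directly on a predictor shrinks the observed validity coefficient, and how the classical Pearson-Thorndike Case II formula recovers the unrestricted value when selection is made directly on that predictor.

```python
import math
import random
import statistics

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs)
    sy = sum((b - my) ** 2 for b in ys)
    return cov / math.sqrt(sx * sy)

random.seed(42)
rho, n = 0.50, 100_000          # assumed population validity and pool size

# Bivariate-normal predictor x and criterion y with correlation rho
x = [random.gauss(0, 1) for _ in range(n)]
y = [rho * a + math.sqrt(1 - rho**2) * random.gauss(0, 1) for a in x]

# Top-20% selection made directly on the predictor (direct range restriction)
cut = sorted(x)[int(0.80 * n)]
selected = [(a, b) for a, b in zip(x, y) if a >= cut]
xs, ys = zip(*selected)

r_restricted = pearson_r(xs, ys)   # shrunken, ~0.26 rather than 0.50

# Pearson-Thorndike Case II: u = SD(unrestricted x) / SD(restricted x)
u = statistics.pstdev(x) / statistics.pstdev(xs)
r_corrected = r_restricted * u / math.sqrt(
    1 - r_restricted**2 + (r_restricted * u) ** 2
)
```

This covers only the simplest case the abstract mentions (direct restriction on a single predictor); the paper's point is precisely that with multiple correlated predictors, attrition, and indirect selection, such corrections can remain biased.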
Journal description:
Human Performance publishes research investigating the nature and role of performance in the workplace and in organizational settings and offers a rich variety of information going beyond the study of traditional job behavior. Dedicated to presenting original research, theory, and measurement methods, the journal investigates individual, team, and firm level performance factors that influence work and organizational effectiveness. Human Performance is a respected forum for behavioral scientists interested in variables that motivate and promote high-level human performance, particularly in organizational and occupational settings. The journal seeks to identify and stimulate relevant research, communication, and theory concerning human capabilities and effectiveness. It serves as a valuable intellectual link between such disciplines as industrial-organizational psychology, individual differences, work physiology, organizational behavior, human resource management, and human factors.