Title: Sourcing algorithms: Rethinking fairness in hiring in the era of algorithmic recruitment
Authors: Leo Alexander, Q. Chelsea Song, Louis Hickman, Hyun Joo Shin
DOI: 10.1111/ijsa.12499 (https://doi.org/10.1111/ijsa.12499)
Journal: International Journal of Selection and Assessment
Published: 2024-09-03

Abstract: Sourcing algorithms are technologies used in online platforms to identify, screen, and inform potential applicants about job openings. Their popularity is rapidly increasing due to their pervasiveness in online advertising and the belief that they can decrease time to hire while improving the quality of new hires. Little is known, however, about their potential risks: sourcing algorithms could (intentionally or unintentionally) encode or exacerbate occupational demographic disparities, thereby hindering organizational diversity and/or decreasing the effectiveness of online hiring practices. Because sourcing algorithms identify and screen potential job applicants *before* they are made aware of employment opportunities, methods for evaluating discrimination in hiring that focus solely on job applicants (e.g., the adverse impact ratio) may fail to detect the effects of discriminatory sourcing algorithms. We therefore propose an expanded model of the employee hiring process that takes the role of sourcing algorithms into account. Based on empirical approximations, we conducted a Monte Carlo simulation study to examine the magnitude and nature of sourcing algorithms' influence on hiring outcomes. Our findings suggest that sourcing algorithms could hinder the diversity of new hires while *misleadingly* suggesting positive diversity outcomes in personnel selection. We provide practical guidance for the use of sourcing algorithms and call for a systematic re-examination of how to evaluate selection system fairness in the era of algorithmic recruitment.
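The abstract's central point can be illustrated with a minimal sketch (all numbers hypothetical, not taken from the study): the adverse impact ratio computed on applicants alone can look perfectly fair even when a sourcing algorithm has already skewed who sees the job ad.

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of selection rates: focal group (a) over referent group (b)."""
    return (selected_a / total_a) / (selected_b / total_b)

# Hypothetical pipeline: 5,000 qualified women and 5,000 qualified men,
# but the sourcing algorithm shows the ad to only 200 women vs. 1,000 men.
# Of those reached, 20 women and 100 men apply; 2 women and 10 men are hired.
air_applicants = adverse_impact_ratio(2, 20, 10, 100)     # applicant-stage view
air_population = adverse_impact_ratio(2, 5000, 10, 5000)  # population-level view
```

Here the applicant-stage ratio equals 1.0 (apparently fair), while the population-level ratio is 0.2, far below the conventional four-fifths threshold, because the disparity was introduced before anyone applied.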
Title: Exploring the role of cognitive load in faking prevention using the dual task paradigm
Authors: Sabah Rasheed, Chet Robie
DOI: 10.1111/ijsa.12497 (https://doi.org/10.1111/ijsa.12497)
Journal: International Journal of Selection and Assessment
Published: 2024-07-30

Abstract: Organizations are increasingly employing personality assessments in their selection processes because of their predictive value for job-related outcomes. However, applicant faking can undermine the validity of such measures. This study explored a novel faking prevention method using the dual task paradigm. Respondents in the dual task conditions memorized a series of five or seven digits while attempting to fake their responses on a personality measure. Their results were compared with a no dual task condition in which respondents were also instructed to fake. Our results revealed that faking performance was limited, and criterion-related validity was improved, in the dual task conditions compared with the no dual task condition. The practical implications and future directions for this initial proof of concept are discussed.
Title: Personality development goals at work: Would a new assessment tool help?
Authors: Wen-Dong Li, Jing Hu, Jiexin Wang
DOI: 10.1111/ijsa.12498 (https://doi.org/10.1111/ijsa.12498)
Journal: International Journal of Selection and Assessment
Published: 2024-07-22

Abstract: We commend the focal article by Dupré and Wille (2024), which introduces personality development goals at work. Yet, to many organizational researchers, this may come across as a bold proposal given its novelty and provocative nature. Seeing the potential of this proposal, we discuss theoretical and methodological challenges that researchers eager to advance this line of research may encounter. We encourage future research to tackle these issues in order to further advance theoretical developments and practical applications of personality development at work.
Title: Reality or illusion: A qualitative study on interviewer job previews and applicant self-presentation
Authors: Annika Schmitz-Wilhelmy, D. Truxillo
DOI: 10.1111/ijsa.12495 (https://doi.org/10.1111/ijsa.12495)
Journal: International Journal of Selection and Assessment
Published: 2024-07-15

Abstract: Job interviews involve an exchange of information between interviewers and applicants to assess fit from each side. But current frameworks on interviewers' job previews and applicants' self-presentation do not completely capture these exchange processes. Using a grounded theory approach, we developed a theoretical model that spans both literatures by showing the complex relationships between job previews and self-presentation in the interview. Our study also introduces a new way of categorizing applicant self-presentation and reveals why interviewers and applicants choose to use certain strategies. Based on 43 qualitative interviews with applicants and interviewers, we identified five dominant applicant self-presentation responses to job preview information: Receding from the Application Process, Reciprocating Reality, Exploiting the RJP, Resisting in Defiance, and Reciprocating Illusion. Furthermore, we found that applicants present many versions of themselves, including not only their actual, favorable, and ought selves but also their anticipated-future self. We also identify interviewers' and applicants' conflicting motives for presenting reality and illusion. Our work provides a deeper understanding of job previews and self-presentation through a big-picture yet fine-grained examination of the communication processes from the viewpoints of the applicant and the interviewer, illustrating implications for both parties and proposing new avenues for research.
Title: I can't get no (need) satisfaction: Using a relatedness need-supportive intervention to improve applicant reactions to asynchronous video interviews
Authors: Hayley I. Moore, Patrick D. Dunlop, Djurre Holtrop, Marylène Gagné
DOI: 10.1111/ijsa.12493 (https://doi.org/10.1111/ijsa.12493)
Journal: International Journal of Selection and Assessment
Published: 2024-07-11

Abstract: Some research suggests that job applicants tend to express negative perceptions of asynchronous video interviews (AVIs). Drawing from basic psychological needs theory, we proposed that these negative perceptions arise partly from the lack of human interaction between applicants and the organization during an AVI, which fails to satisfy applicants' need for *relatedness*. Recruiting participants through Prolific, we conducted two experimental studies that manipulated the level of relatedness support through a relatedness need-supportive introductory video containing empathetic messaging and humor. Using a vignette approach, participants in Study 1 (*N* = 100) evaluated a hypothetical AVI that included one of two introductory videos: relatedness-supportive versus neutral messaging. The relatedness-supportive video yielded higher relatedness need satisfaction (*d* = 0.53) and organizational attraction ratings (*d* = 0.49) than the neutral video. In Study 2, participants (*N* = 231) completed an AVI that included one of the two videos and evaluated their AVI experience. In contrast to the vignette study, we observed no significant differences between groups in relatedness need satisfaction, organizational attraction, or other outcomes. Our findings provided little evidence that humor and empathic video messaging improve reactions to an AVI, and they illustrate the limits of the external validity of vignette designs.
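The *d* values reported above are standardized mean differences (Cohen's d). As a reminder of what that statistic computes, here is a minimal sketch with made-up rating data (the groups and numbers are illustrative, not the study's):

```python
import statistics

def cohens_d(group1, group2):
    """Standardized mean difference using the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    s1, s2 = statistics.variance(group1), statistics.variance(group2)
    pooled_sd = (((n1 - 1) * s1 + (n2 - 1) * s2) / (n1 + n2 - 2)) ** 0.5
    return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd

# Hypothetical organizational-attraction ratings (1-5 scale)
supportive_video = [4.2, 4.5, 3.9, 4.8, 4.1]
neutral_video = [3.6, 3.9, 3.4, 4.0, 3.7]
d = cohens_d(supportive_video, neutral_video)
```

A positive d means the first group's mean is higher, expressed in pooled standard-deviation units; values near 0.5, as in Study 1, are conventionally read as medium effects.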
Title: Assessment order and faking behavior
Authors: Brett L. Wallace, Gary N. Burns
DOI: 10.1111/ijsa.12496 (https://doi.org/10.1111/ijsa.12496)
Journal: International Journal of Selection and Assessment
Published: 2024-07-10

Abstract: Personality testing is a critical component of organizational assessment and selection processes. Despite nearly a century of research recognizing faking as a concern in personality assessment, the impact of order effects on faking has not been thoroughly examined. This study investigates whether the sequence of administering personality and cognitive ability measures affects the extent of faking. Previous research suggests administering personality measures early in the assessment process to mitigate adverse impact; however, models of faking behavior and signaling theory imply that test order could influence faking. In two simulated applicant laboratory studies (Study 1 *N* = 172, Study 2 *N* = 174), participants were randomly assigned to complete personality measures either before or after cognitive ability tests. Results indicate that participants who completed personality assessments first exhibited significantly higher levels of faking than those who took cognitive ability tests first. These findings suggest that the order of test administration influences faking, potentially due to the expenditure of cognitive resources during cognitive ability assessments. To enhance the integrity of selection procedures, administrators should consider the sequence of test administration to mitigate faking and improve the accuracy of personality assessments. This study also underscores the need for continued exploration of contextual factors influencing faking behavior. Future research should investigate the mechanisms driving these order effects and develop strategies to reduce faking in personality assessments.
Title: Evaluating interview criterion-related validity for distinct constructs: A meta-analysis
Authors: Timothy G. Wingate, Joshua S. Bourdage, Piers Steel
DOI: 10.1111/ijsa.12494 (https://doi.org/10.1111/ijsa.12494)
Journal: International Journal of Selection and Assessment
Published: 2024-07-10

Abstract: The employment interview is used to assess myriad constructs to inform personnel selection decisions. This article describes the first meta-analytic review of the criterion-related validity of interview-based assessments of specific constructs (i.e., related to task and contextual performance). As such, this study explores the suitability of the interview for predicting specific dimensions of performance and, furthermore, whether and how interviews should be designed to inform the assessment of distinct constructs. A comprehensive search process identified *k* = 37 studies comprising *N* = 30,646 participants (*N* = 4449 with the removal of one study). Results suggest that constructs related to task (*ρ* = .30) and contextual (*ρ* = .28) performance are assessed with similar levels of criterion-related validity. Although interview evaluations of task and contextual performance constructs did not show discriminant validity within the interview itself, interview evaluations were more predictive of the targeted criterion construct than of alternative constructs. We further found evidence that evaluations of contextual performance constructs might particularly benefit from the adoption of more structured interview scoring procedures. However, we expect that new research on interview design factors may find additional moderating effects, and we point to critical gaps in the current body of literature on employment interviews. These results illustrate how a construct-specific approach to interview validity can spur new developments in the modeling, assessment, and selection of specific work performance constructs.
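For readers unfamiliar with the mechanics behind pooled validities like the ρ values above: the first step of a bare-bones meta-analysis is a sample-size-weighted mean of the observed correlations (the reported ρ additionally corrects for artifacts such as criterion unreliability, which this sketch omits). All study values below are hypothetical.

```python
def weighted_mean_r(studies):
    """Sample-size-weighted mean correlation across (N, r) study pairs."""
    total_n = sum(n for n, _ in studies)
    return sum(n * r for n, r in studies) / total_n

# Hypothetical (sample size, observed validity) pairs
studies = [(150, 0.25), (300, 0.32), (90, 0.18), (500, 0.30)]
r_bar = weighted_mean_r(studies)
```

Weighting by N gives large studies more influence, since their sampling error is smaller; artifact corrections are then applied to this pooled estimate to arrive at ρ.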
Title: Relations of personality factors and suitability ratings to Swedish military pilot education completion
Authors: Malcolm Sehlström, Jessica K. Ljungberg, Markus B. T. Nyström, Anna-Sara Claeson
DOI: 10.1111/ijsa.12492 (https://doi.org/10.1111/ijsa.12492)
Journal: International Journal of Selection and Assessment
Published: 2024-07-01

Abstract: Improved understanding of what it takes to be a pilot is an ongoing effort within aviation. We used an exploratory approach to examine whether there are personality-related differences in who completes the Swedish military pilot education. Assessment records of 182 applicants accepted to the education between 2004 and 2020 were studied (mean age 24 years, SD = 4.2; 96% men, 4% women). Discriminant analysis was used to explore which personality traits and suitability ratings might be related to education completion. The analysis included suitability assessments made by senior pilots and by a psychologist, a number of traits assessed by the same psychologist, as well as the Commander Trait Inventory (CTI). The resulting discriminant function was significant (Wilks' Lambda = 0.808, χ²(20) = 32.817, *p* = .035) with a canonical correlation of 0.44. The model was able to classify 74.1% of sample cases correctly. The modeling suggests that senior pilot assessment and psychologist assessment both predict education completion. Also contributing were the traits energy, professional motivation, study forecast, and leader potential.
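The "74.1% classified correctly" figure comes from scoring each case on the discriminant function and checking group assignment. A minimal two-group Fisher discriminant on two hypothetical predictors (stand-ins for the senior-pilot and psychologist ratings; all data invented) sketches the idea:

```python
def mean_vec(rows):
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def fisher_weights(g1, g2):
    """Discriminant weights w = Sw^-1 (m1 - m2), closed-form 2x2 inverse."""
    m1, m2 = mean_vec(g1), mean_vec(g2)
    s = [[0.0, 0.0], [0.0, 0.0]]  # pooled within-group scatter matrix
    for g, m in ((g1, m1), (g2, m2)):
        for r in g:
            d = [r[0] - m[0], r[1] - m[1]]
            s[0][0] += d[0] * d[0]; s[0][1] += d[0] * d[1]
            s[1][0] += d[1] * d[0]; s[1][1] += d[1] * d[1]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    diff = [m1[0] - m2[0], m1[1] - m2[1]]
    return [(s[1][1] * diff[0] - s[0][1] * diff[1]) / det,
            (-s[1][0] * diff[0] + s[0][0] * diff[1]) / det]

# Hypothetical (senior-pilot rating, psychologist rating) pairs
completers = [[7.0, 6.5], [6.8, 7.1], [7.4, 6.9], [6.9, 7.3]]
dropouts = [[5.9, 6.0], [6.1, 5.7], [5.8, 6.2], [6.3, 5.9]]
w = fisher_weights(completers, dropouts)

def score(r):
    return w[0] * r[0] + w[1] * r[1]

# Classify against the midpoint of the two group mean scores
cut = (sum(score(r) for r in completers) / 4 + sum(score(r) for r in dropouts) / 4) / 2
correct = sum(score(r) > cut for r in completers) + sum(score(r) < cut for r in dropouts)
rate = correct / 8  # classification accuracy, analogous to the study's 74.1%
```

With these well-separated toy groups the rate is perfect; with real, overlapping applicant data it falls below 1, as in the study.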
Title: ChatGPT, can you take my job interview? Examining artificial intelligence cheating in the asynchronous video interview
Authors: Damian Canagasuriam, Eden-Raye Lukacik
DOI: 10.1111/ijsa.12491 (https://doi.org/10.1111/ijsa.12491)
Journal: International Journal of Selection and Assessment
Published: 2024-06-24

Abstract: Artificial intelligence (AI) chatbots, such as Chat Generative Pre-trained Transformer (ChatGPT), may threaten the validity of selection processes. This study provides the first examination of how AI cheating in the asynchronous video interview (AVI) may impact interview performance and applicant reactions. In a preregistered experiment, Prolific respondents (*N* = 245) completed an AVI after being randomly assigned to a non-ChatGPT, ChatGPT-Verbatim (read AI-generated responses word-for-word), or ChatGPT-Personalized condition (provided their résumé/contextual instructions to ChatGPT and modified the AI-generated responses). The ChatGPT conditions received considerably higher scores on overall performance and content than the non-ChatGPT condition. However, response delivery ratings did not differ between conditions, and the ChatGPT conditions received lower honesty ratings. Both ChatGPT conditions rated the AVI as lower on procedural justice than the non-ChatGPT condition.
Title: Equivalence between direct and indirect measures of psychological capital
Authors: Guido Alessandri, L. Filosa
DOI: 10.1111/ijsa.12488 (https://doi.org/10.1111/ijsa.12488)
Journal: International Journal of Selection and Assessment
Published: 2024-06-14

Abstract: Psychological Capital (PsyCap) represents an individual's positive and resourceful state, defined by high levels of self-efficacy, optimism, hope, and resiliency. Since its inception, extensive research has focused on exploring the factors influencing and outcomes associated with PsyCap within organizational contexts. Consequently, there has been a growing demand for reliable assessment tools to measure PsyCap accurately. The present multi-study investigation examined whether the two main measures of Psychological Capital, the Psychological Capital Questionnaire and the Implicit-Psychological Capital Questionnaire, converge in measuring the same underlying construct. In Study 1, using data from 327 employees from whom we obtained both self- and coworker reports on both explicit and implicit Psychological Capital, we evaluated the degree of convergence between measures using a Multitrait-Multimethod approach. In Study 2, we used six-wave longitudinal data from 354 employees, gathered weekly for 6 consecutive weeks, to test a series of STARTS models, to decompose the proportions of variance of all the components (i.e., trait, state, and error) of both Psychological Capital measures, and to compare their magnitude and similarity. In this second study, we also compared their longitudinal predictive power with respect to important organizational outcomes (i.e., work engagement and emotional exhaustion). Overall, the results provided empirical evidence for a high degree of convergence between explicit and implicit measures of Psychological Capital. Implications and potential applications of our findings are discussed.