{"title":"ChatGPT, can you take my job interview? Examining artificial intelligence cheating in the asynchronous video interview","authors":"Damian Canagasuriam, Eden-Raye Lukacik","doi":"10.1111/ijsa.12491","DOIUrl":"10.1111/ijsa.12491","url":null,"abstract":"<p>Artificial intelligence (AI) chatbots, such as Chat Generative Pre-trained Transformer (ChatGPT), may threaten the validity of selection processes. This study provides the first examination of how AI cheating in the asynchronous video interview (AVI) may impact interview performance and applicant reactions. In a preregistered experiment, Prolific respondents (<i>N</i> = 245) completed an AVI after being randomly assigned to a non-ChatGPT, ChatGPT-Verbatim (read AI-generated responses word-for-word), or ChatGPT-Personalized condition (provided their résumé/contextual instructions to ChatGPT and modified the AI-generated responses). The ChatGPT conditions received considerably higher scores on overall performance and content than the non-ChatGPT condition. However, response delivery ratings did not differ between conditions and the ChatGPT conditions received lower honesty ratings. Both ChatGPT conditions rated the AVI as lower on procedural justice than the non-ChatGPT condition.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12491","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Equivalence between direct and indirect measures of psychological capital","authors":"Guido Alessandri, Lorenzo Filosa","doi":"10.1111/ijsa.12488","DOIUrl":"10.1111/ijsa.12488","url":null,"abstract":"<p>Psychological Capital (PsyCap) represents an individual's positive and resourceful state, defined by high levels of self-efficacy, optimism, hope, and resiliency. Since its inception, extensive research has focused on exploring the factors influencing and outcomes associated with PsyCap within organizational contexts. Consequently, there has been a growing demand for reliable assessment tools to measure PsyCap accurately. The present multi-study investigation aimed to examine whether the two main measures of Psychological Capital, namely the Psychological Capital Questionnaire and the Implicit-Psychological Capital Questionnaire, show convergence in measuring the same underlying construct. In Study 1, using data from 327 employees from whom we obtained both self- and coworker reports on both explicit and implicit Psychological Capital, we evaluated the degree of convergence between measures using a Multitrait-Multimethod approach. In Study 2, we used six-wave longitudinal data from 354 employees, gathered every week for 6 consecutive weeks, to test a series of STARTS models, to decompose the proportions of variance of all the components (i.e., trait, state and error) of both Psychological Capital measures, and to compare their magnitude and similarity. In this second study, we also compared their longitudinal predictive power with respect to important organizational outcomes (i.e., work engagement and emotional exhaustion). All in all, results provided empirical evidence for the high degree of convergence of explicit and implicit measures of Psychological Capital. Implications and potential applications of our findings are discussed.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"594-611"},"PeriodicalIF":2.6,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141338134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personality development goals at work: A new frontier in personality assessment in organizations","authors":"Sofie Dupré, Bart Wille","doi":"10.1111/ijsa.12490","DOIUrl":"10.1111/ijsa.12490","url":null,"abstract":"<p>There is a long and successful history of personality research in organizational contexts and personality assessments are now widely used in a variety of human resources or talent management interventions. In this tradition, assessment typically involves describing (future) employees' personality profiles, and then using this information to select or adapt work roles to optimally meet employees' traits. Although useful, one limitation of this approach is that it overlooks employees' motivations and abilities to develop themselves in their pursuit of greater person-environment fit. This paper therefore argues for a new type of personality assessment that goes beyond the current descriptive approach. Specifically, we propose assessing employees' Personality Development Goals (PDGs) at work to complement the traditional assessment of “who are you?” with information about “who do you want to be?”. We first briefly summarize the current approach to personality assessment and highlight its limitations. Then, we take stock of the research on PDGs in clinical and personality literatures, and outline the reasons for translating this into organizational applications. We end by describing the key principles that should inform the implementation of PDGs at work and propose a number of future research directions to support and advance this practice.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141339393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The need for “Considered Estimation” versus “Conservative Estimation” when ranking or comparing predictors of job performance","authors":"Philip Bobko, Philip L. Roth, Le Huy, In-Sue Oh, Jesus Salgado","doi":"10.1111/ijsa.12489","DOIUrl":"10.1111/ijsa.12489","url":null,"abstract":"<p>A recent attempt to generate an updated ranking for the operational validity of 25 selection procedures, using a process labeled “conservative estimation” (Sackett et al., 2022), is flawed and misleading. When conservative estimation's treatment of range restriction (RR) is used, it is unclear if reported validity differences among predictors reflect (i) true differences, (ii) differential degrees of RR (different <i>u</i> values), (iii) differential correction for RR (no RR correction vs. RR correction), or (iv) some combination of these factors. We demonstrate that this creates bias and introduces confounds when ranking (or comparing) selection procedures. Second, the list of selection procedures being directly compared includes both predictor methods and predictor constructs, in spite of the substantial effect construct saturation has on validity estimates (e.g., Arthur & Villado, 2008). This causes additional confounds that cloud comparative interpretations. Based on these, and other, concerns we outline an alternative, “considered estimation” strategy when comparing predictors of job performance. Basic tenets include using RR corrections in the same manner for all predictors, parsing validities of selection methods by constructs, applying the logic beyond validities (e.g., <i>d</i>s), thoughtful reconsideration of prior meta-analyses, considering sensitivity analyses, and accounting for nonindependence across studies.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141342559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What drives employers’ favorability ratings on employer review platforms? The role of symbolic, personal, and emotional content","authors":"Christoph E. Höllig, Andranik Tumasjan, Filip Lievens","doi":"10.1111/ijsa.12478","DOIUrl":"10.1111/ijsa.12478","url":null,"abstract":"<p>Employer review platforms have changed the recruitment landscape by allowing current and former employees to post messages about an employer outside of direct company control. Therefore, they have emerged as an important form of third-party employer branding. However, we know little about how such open-ended comments relate to the key variable in employer reviews: employers’ favorability rating. Therefore, we start by situating this variable among other constructs in the employer branding space. Next, we build theory on how content in the open-ended comments of an employer review relates to the positivity or negativity of the reviews’ favorability rating. We test our hypotheses via a text-mining analysis of approximately half a million employer reviews. The results reveal an intriguing discrepancy. Although instrumental, impersonal, and cognitive content is more prevalent in employer reviews, symbolic, personal, and emotional content dominates employer reviews’ favorability rating. In terms of practical implications, this result shows that merely inspecting the frequency of attributes mentioned in employer review text comments as a basis for changing company policies of employer branding efforts might be misguided. We discuss implications for theory and future research, and provide our dictionary for further scholarly and practical use.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"579-593"},"PeriodicalIF":2.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12478","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141358434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of a constructed response retest strategy on faking, test perceptions, and criterion-related validity of situational judgment tests","authors":"Liyan Xi, Qingxiong Weng, Jan Corstjens, Xiujuan Wang, Lixin Chen","doi":"10.1111/ijsa.12482","DOIUrl":"10.1111/ijsa.12482","url":null,"abstract":"<p>This research proposes a faking-mitigation strategy for situational judgment tests (SJTs), referred to as the constructed response retest (CR-retest). The CR-retest strategy involves presenting SJT items in a constructed response format first, followed by equivalent closed-ended items with the same situation description. Two field experiments (<i>N</i><sub>1</sub> = 733, <i>N</i><sub>2</sub> = 273) were conducted to investigate the effects of this strategy and contrast it with a commonly used pretest warning message. Study 1 revealed that the CR-retest strategy was more effective than the warning message in reducing score inflation and improving criterion-related validity. Study 2 delved deeper by investigating the effects of the CR-retest strategy on applicant reactions in a 2 (with or without CR-retest strategy) × 2 (warning or control message) between-subjects design. The results showed that applicants reported positive fairness perceptions on SJT items with the CR-retest strategy. The CR-retest strategy was effective in reducing faking by evoking threat perceptions, whereas the warning message heightened threat and fear. Combining two strategies further decreased faking without undermining fairness perceptions. Overall, our results indicate that the CR-retest strategy could be a valuable method to mitigate faking in real-life selection settings.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"561-578"},"PeriodicalIF":2.6,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141376022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Validity evidence for personality scores from algorithms trained on low-stakes verbal data and applied to high-stakes interviews","authors":"Brent A. Stevenor, Louis Hickman, Michael J. Zickar, Fletcher Wimbush, Weston Beck","doi":"10.1111/ijsa.12480","DOIUrl":"10.1111/ijsa.12480","url":null,"abstract":"<p>We present multifaceted validity evidence for machine learning models (referred to as automated video interview personality assessments (AVI-PAs) in this research) that were trained on verbal data and interviewer ratings from low-stakes interviews and applied to high-stakes interviews to infer applicant personality. The predictive models used RoBERTa embeddings and binary unigrams as predictors. In Study 1 (<i>N</i> = 107), AVI-PAs more closely reflected interviewer ratings compared to applicant and reference ratings. Also, AVI-PAs and interviewer ratings had similar relations with applicants' interview behaviors, biographical information, and hireability. In Study 2 (<i>N</i> = 25), AVI-PAs had weak-moderate (nonsignificant) relations with subsequent supervisor ratings of job performance. Empirically, the AVI-PAs were most similar to interviewer ratings. AVI-PAs, interviewer ratings, self-reports, and reference-reports all demonstrated weak discriminant validity evidence. LASSO regression provided superior (but still weak) discriminant evidence compared to elastic net regression. Despite using natural language embeddings to operationalize verbal behavior, the AVI-PAs (except emotional stability) exhibited large correlations with interviewee word count. We discuss the implications of these findings for pre-employment personality assessments and effective AVI-PA design.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"544-560"},"PeriodicalIF":2.6,"publicationDate":"2024-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141188856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How different backgrounds in video interviews can bias evaluations of applicants","authors":"Johannes M. Basch, Nicolas Roulin, Josua Gläsner, Raphael Spengler, Julia Wilhelm","doi":"10.1111/ijsa.12487","DOIUrl":"10.1111/ijsa.12487","url":null,"abstract":"<p>Organizations are increasingly using technology-enabled formats such as asynchronous video interviews (AVIs) to evaluate candidates. However, the personal environment of applicants visible in AVI recordings may introduce additional bias in the evaluation of interview performance. This study extends existing research by examining the influence of cues signaling affiliation with Islam or homosexuality in the background and comparing them with a neutral background using an experimental design and a German sample (<i>N</i> = 222). Results showed that visible signs of religious affiliation with Islam led to lower perceived competence, while perceived warmth and interview performance were unaffected. Visual cues of homosexuality had no effect on perceptions of the applicant. In addition, personal characteristics of the raters, such as their intrinsic religious orientation or their attitudes towards homosexuality influenced applicants’ ratings, so that a non-Muslim religious orientation was negatively associated with evaluations of the Muslim candidate and a negative attitude towards homosexuality was negatively associated with evaluations of the homosexual candidate. This study thus contributes to the literature on AVIs and discrimination against Muslims and members of the 2SLGBTQI+ community in personnel selection contexts.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"535-543"},"PeriodicalIF":2.6,"publicationDate":"2024-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12487","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141188914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Departures from linearity as evidence of applicant distortion on personality tests","authors":"Neil D. Christiansen, Chet Robie, Ye Ra Jeong, Gary N. Burns, Douglas E. Haaland, Mei-Chuan Kung, Ted B. Kinney","doi":"10.1111/ijsa.12481","DOIUrl":"10.1111/ijsa.12481","url":null,"abstract":"<p>Two field studies were conducted to examine how applicant faking impacts the normally linear construct relationships of personality tests using segmented regression and by partitioning samples to evaluate effects on validity across different ranges of test scores. Study 1 investigated validity decay across score ranges of applicants to a state police academy (<i>N</i> = 442). Personality test scores had nonlinear construct relations in the applicant sample, with scores from the top of the distribution being worse predictors of subsequent performance but more strongly related to social desirability scores; this pattern was not found for the partitioned scores of a cognitive test. Study 2 compared the relationship between personality test scores and job performance ratings of applicants (<i>n</i> = 97) to those of incumbents (<i>n</i> = 318) in a customer service job. Departures from linearity were observed in the applicant but not in the incumbent sample. Effects of applicant distortion on the validity of personality tests are especially concerning when validity decay increases toward the top of the distribution of test scores. Observing slope differences across ranges of applicant personality test scores can be an important tool in selection.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"521-534"},"PeriodicalIF":2.6,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12481","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141108557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improving structured interview acceptance through training","authors":"Steve Baumgartner, Lynn Bartels, Julia Levashina","doi":"10.1111/ijsa.12473","DOIUrl":"10.1111/ijsa.12473","url":null,"abstract":"<p>Despite having predictive validity above other selection methods, structured interviews are not always used. Using the Theory of Planned Behavior as a framework, this study examines the role of interview training in increasing structured interview acceptance (SIA). Based on a survey of 190 practitioners in the fields of Human Resources, I-O Psychology, and other professionals who conduct employment interviews, our results show that not all interviewer training programs are equally effective in increasing SIA. While participation in formal interviewer training is related to SIA, SIA could be influenced more by incorporating certain training components, including <i>training on how to avoid rating errors</i> (<i>r</i> = .21), <i>learning how to evaluate interview answers</i> (<i>r</i> = .19), <i>interview practice/roleplaying</i> (<i>r</i> = .17), <i>training on job analysis</i> (<i>r</i> = .15), <i>legal issues</i> (<i>r</i> = .15), <i>background and purpose of the interview</i> (<i>r</i> = .13), <i>job requirements for the position(s) being filled</i> (<i>r</i> = .13), and <i>a discussion about interview verbal and nonverbal behaviors to avoid</i> (<i>r</i> = .13). Additionally, we found that training components display different relationship with SIA across our two sub-samples. For example, in the MTurk sample (i.e., composed primarily from a managerial population) including <i>job analysis</i>, <i>how to evaluate answers</i>, and <i>how to avoid rating errors</i> correlated significantly with SIA. However, in the non-MTurk sample (i.e., composed primarily from a HR professional population), <i>interview practice/role playing</i>, <i>rapport building</i>, <i>use of a videotaped interview to guide instructions</i>, and <i>how to make decisions from interview data</i> correlated significantly with SIA. This highlights the importance of training needs analysis to better understand the audience before training. We suggest that organizations incorporate the identified components into interviewer training to enhance the structured interviews acceptance and ensure that interviewers are more likely to implement structured interview techniques in practice.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"512-520"},"PeriodicalIF":2.6,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141118947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}