{"title":"Reality or illusion: A qualitative study on interviewer job previews and applicant self-presentation","authors":"Annika Schmitz-Wilhelmy, Donald M. Truxillo","doi":"10.1111/ijsa.12495","DOIUrl":"10.1111/ijsa.12495","url":null,"abstract":"<p>Job interviews involve an exchange of information between interviewers and applicants to assess fit from each side. But current frameworks on interviewers' job previews and applicants' self-presentation do not completely capture these exchange processes. Using a grounded theory approach, we developed a theoretical model that spans both literatures by showing the complex relationships between job previews and self-presentation in the interview. Our study also introduces a new way of categorizing applicant self-presentation and reveals why interviewers and applicants choose to use certain strategies. Based on 43 qualitative interviews with applicants and interviewers, we identified five dominant applicant self-presentation responses to job preview information: Receding from the Application Process, Reciprocating Reality, Exploiting the RJP, Resisting in Defiance, and Reciprocating Illusion. Furthermore, we found that applicants present many versions of themselves that not only include their actual, favorable, and ought self but also their anticipated-future self. We also identify interviewers' and applicants' conflicting motives for presenting reality and illusion. 
Our work provides a deeper understanding of job previews and self-presentation by providing a big-picture, yet fine-grained examination of the communication processes from the viewpoint of the applicant and the interviewer, illustrating implications for both parties and proposing new avenues for research.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12495","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141649213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Assessment order and faking behavior","authors":"Brett L. Wallace, Gary N. Burns","doi":"10.1111/ijsa.12496","DOIUrl":"10.1111/ijsa.12496","url":null,"abstract":"<p>Personality testing is a critical component of organizational assessment and selection processes. Despite nearly a century of research recognizing faking as a concern in personality assessment, the impact of order effects on faking has not been thoroughly examined. This study investigates whether the sequence of administering personality and cognitive ability measures affects the extent of faking. Previous research suggests administering personality measures early in the assessment process to mitigate adverse impact; however, models of faking behavior and signaling theory imply that test order could influence faking. In two simulated applicant laboratory studies (Study 1 <i>N</i> = 172, Study 2 <i>N</i> = 174), participants were randomly assigned to complete personality measures either before or after cognitive ability tests. Results indicate that participants who completed personality assessments first exhibited significantly higher levels of faking compared to those who took cognitive ability tests first. These findings suggest that the order of test administration influences faking, potentially due to the expenditure of cognitive resources during cognitive ability assessments. To enhance the integrity of selection procedures, administrators should consider the sequence of test administration to mitigate faking and improve the accuracy of personality assessments. This study also underscores the need for continued exploration of contextual factors influencing faking behavior. 
Future research should investigate the mechanisms driving these order effects and develop strategies to reduce faking in personality assessments.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"I can't get no (need) satisfaction: Using a relatedness need-supportive intervention to improve applicant reactions to asynchronous video interviews","authors":"Hayley I. Moore, Patrick D. Dunlop, Djurre Holtrop, Marylène Gagné","doi":"10.1111/ijsa.12493","DOIUrl":"10.1111/ijsa.12493","url":null,"abstract":"<p>Some research suggests that job applicants tend to express negative perceptions of asynchronous video interviews (AVIs). Drawing from basic psychological needs theory, we proposed that these negative perceptions arise partly from the lack of human interaction between applicants and the organization during an AVI, which fails to satisfy applicants' need for <i>relatedness</i>. Recruiting participants through Prolific, we conducted two experimental studies that aimed to manipulate the level of relatedness support through a relatedness need-supportive introductory video containing empathetic messaging and humor. Using a vignette approach, participants in study 1 (<i>N</i> = 100) evaluated a hypothetical AVI that included one of two introductory videos: relatedness-supportive versus neutral messaging. The relatedness-supportive video yielded higher relatedness need satisfaction (<i>d</i> = 0.53) and organizational attraction ratings (<i>d</i> = 0.49) than the neutral video. In study 2, participants (<i>N</i> = 231) completed an AVI that included one of the two videos and evaluated their AVI experience. In contrast to the vignette study, we observed no significant differences between groups for relatedness need satisfaction, organizational attraction, or other outcomes. 
Our findings provided little evidence that humor and empathic video messaging improve reactions to an AVI and illustrated the limitations of the external validity of vignette designs.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141614564","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating interview criterion-related validity for distinct constructs: A meta-analysis","authors":"Timothy G. Wingate, Joshua S. Bourdage, Piers Steel","doi":"10.1111/ijsa.12494","DOIUrl":"10.1111/ijsa.12494","url":null,"abstract":"<p>The employment interview is used to assess myriad constructs to inform personnel selection decisions. This article describes the first meta-analytic review of the criterion-related validity of interview-based assessments of specific constructs (i.e., related to task and contextual performance). As such, this study explores the suitability of the interview for predicting specific dimensions of performance, and furthermore, if and how interviews should be designed to inform the assessment of distinct constructs. A comprehensive search process identified <i>k</i> = 37 studies comprising <i>N</i> = 30,646 participants (<i>N</i> = 4449 with the removal of one study). Results suggest that constructs related to task (<i>ρ</i> = .30) and contextual (<i>ρ</i> = .28) performance are assessed with similar levels of criterion-related validity. Although interview evaluations of task and contextual performance constructs did not show discriminant validity within the interview itself, interview evaluations were more predictive of the targeted criterion construct than of alternative constructs. We further found evidence that evaluations of contextual performance constructs might particularly benefit from the adoption of more structured interview scoring procedures. However, we expect that new research on interview design factors may find additional moderating effects and we point to critical gaps in our current body of literature on employment interviews. 
These results illustrate how a construct-specific approach to interview validity can spur new developments in the modeling, assessment, and selection of specific work performance constructs.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12494","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141587866","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Relations of personality factors and suitability ratings to Swedish military pilot education completion","authors":"Malcolm Sehlström, Jessica K. Ljungberg, Markus B. T. Nyström, Anna-Sara Claeson","doi":"10.1111/ijsa.12492","DOIUrl":"10.1111/ijsa.12492","url":null,"abstract":"<p>Improved understanding of what it takes to be a pilot is an ongoing effort within aviation. We used an exploratory approach to examine whether there are personality-related differences in who completes the Swedish military pilot education. Assessment records of 182 applicants accepted to the education between 2004 and 2020 were studied (mean age 24, SD 4.2; 96% men, 4% women). Discriminant analysis was used to explore which personality traits and suitability ratings might be related to education completion. Analysis included suitability assessments made by senior pilots and by a psychologist, a number of traits assessed by the same psychologist, as well as the Commander Trait Inventory (CTI). The resulting discriminant function was significant (Wilks' Lambda = 0.808, χ²(20) = 32.817, <i>p</i> = .035) with a canonical correlation of 0.44. The model was able to classify 74.1% of sample cases correctly. The modeling suggests that senior pilot assessment and psychologist assessment both predict education completion. 
Also contributing were the traits energy, professional motivation, study forecast and leader potential.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12492","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ChatGPT, can you take my job interview? Examining artificial intelligence cheating in the asynchronous video interview","authors":"Damian Canagasuriam, Eden-Raye Lukacik","doi":"10.1111/ijsa.12491","DOIUrl":"10.1111/ijsa.12491","url":null,"abstract":"<p>Artificial intelligence (AI) chatbots, such as Chat Generative Pre-trained Transformer (ChatGPT), may threaten the validity of selection processes. This study provides the first examination of how AI cheating in the asynchronous video interview (AVI) may impact interview performance and applicant reactions. In a preregistered experiment, Prolific respondents (<i>N</i> = 245) completed an AVI after being randomly assigned to a non-ChatGPT, ChatGPT-Verbatim (read AI-generated responses word-for-word), or ChatGPT-Personalized condition (provided their résumé/contextual instructions to ChatGPT and modified the AI-generated responses). The ChatGPT conditions received considerably higher scores on overall performance and content than the non-ChatGPT condition. However, response delivery ratings did not differ between conditions and the ChatGPT conditions received lower honesty ratings. Both ChatGPT conditions rated the AVI as lower on procedural justice than the non-ChatGPT condition.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12491","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141505251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Equivalence between direct and indirect measures of psychological capital","authors":"Guido Alessandri, Lorenzo Filosa","doi":"10.1111/ijsa.12488","DOIUrl":"10.1111/ijsa.12488","url":null,"abstract":"<p>Psychological Capital (PsyCap) represents an individual's positive and resourceful state, defined by high levels of self-efficacy, optimism, hope, and resiliency. Since its inception, extensive research has focused on exploring the factors influencing and outcomes associated with PsyCap within organizational contexts. Consequently, there has been a growing demand for reliable assessment tools to measure PsyCap accurately. The present multi-study investigation aimed to examine whether the two main measures of Psychological Capital, namely the Psychological Capital Questionnaire and the Implicit-Psychological Capital Questionnaire, show convergence in measuring the same underlying construct. In Study 1, using data from 327 employees from whom we obtained both self- and coworker reports on both explicit and implicit Psychological Capital, we evaluated the degree of convergence between measures using a Multitrait-Multimethod approach. In Study 2, we used six-wave longitudinal data from 354 employees, gathered every week for 6 consecutive weeks, to test a series of STARTS models, to decompose the proportions of variance of all the components (i.e., trait, state and error) of both Psychological Capital measures, and to compare their magnitude and similarity. In this second study, we also compared their longitudinal predictive power with respect to important organizational outcomes (i.e., work engagement and emotional exhaustion). All in all, results provided empirical evidence for the high degree of convergence of explicit and implicit measures of Psychological Capital. 
Implications and potential applications of our findings are discussed.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"594-611"},"PeriodicalIF":2.6,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141338134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Personality development goals at work: A new frontier in personality assessment in organizations","authors":"Sofie Dupré, Bart Wille","doi":"10.1111/ijsa.12490","DOIUrl":"10.1111/ijsa.12490","url":null,"abstract":"<p>There is a long and successful history of personality research in organizational contexts and personality assessments are now widely used in a variety of human resources or talent management interventions. In this tradition, assessment typically involves describing (future) employees' personality profiles, and then using this information to select or adapt work roles to optimally meet employees' traits. Although useful, one limitation of this approach is that it overlooks employees' motivations and abilities to develop themselves in their pursuit of greater person-environment fit. This paper therefore argues for a new type of personality assessment that goes beyond the current descriptive approach. Specifically, we propose assessing employees' Personality Development Goals (PDGs) at work to complement the traditional assessment of “who are you?” with information about “who do you want to be?”. We first briefly summarize the current approach to personality assessment and highlight its limitations. Then, we take stock of the research on PDGs in clinical and personality literatures, and outline the reasons for translating this into organizational applications. 
We end by describing the key principles that should inform the implementation of PDGs at work and propose a number of future research directions to support and advance this practice.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141339393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The need for “Considered Estimation” versus “Conservative Estimation” when ranking or comparing predictors of job performance","authors":"Philip Bobko, Philip L. Roth, Le Huy, In-Sue Oh, Jesus Salgado","doi":"10.1111/ijsa.12489","DOIUrl":"10.1111/ijsa.12489","url":null,"abstract":"<p>A recent attempt to generate an updated ranking for the operational validity of 25 selection procedures, using a process labeled “conservative estimation” (Sackett et al., 2022), is flawed and misleading. When conservative estimation's treatment of range restriction (RR) is used, it is unclear if reported validity differences among predictors reflect (i) true differences, (ii) differential degrees of RR (different <i>u</i> values), (iii) differential correction for RR (no RR correction vs. RR correction), or (iv) some combination of these factors. We demonstrate that this creates bias and introduces confounds when ranking (or comparing) selection procedures. Second, the list of selection procedures being directly compared includes both predictor methods and predictor constructs, in spite of the substantial effect construct saturation has on validity estimates (e.g., Arthur & Villado, 2008). This causes additional confounds that cloud comparative interpretations. Based on these, and other, concerns we outline an alternative, “considered estimation” strategy when comparing predictors of job performance. 
Basic tenets include using RR corrections in the same manner for all predictors, parsing validities of selection methods by constructs, applying the logic beyond validities (e.g., <i>d</i>s), thoughtful reconsideration of prior meta-analyses, considering sensitivity analyses, and accounting for nonindependence across studies.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141342559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"What drives employers’ favorability ratings on employer review platforms? The role of symbolic, personal, and emotional content","authors":"Christoph E. Höllig, Andranik Tumasjan, Filip Lievens","doi":"10.1111/ijsa.12478","DOIUrl":"10.1111/ijsa.12478","url":null,"abstract":"<p>Employer review platforms have changed the recruitment landscape by allowing current and former employees to post messages about an employer outside of direct company control. Therefore, they have emerged as an important form of third-party employer branding. However, we know little about how such open-ended comments relate to the key variable in employer reviews: employers’ favorability rating. We therefore start by situating this variable among other constructs in the employer branding space. Next, we build theory on how content in the open-ended comments of an employer review relates to the positivity or negativity of the reviews’ favorability rating. We test our hypotheses via a text-mining analysis of approximately half a million employer reviews. The results reveal an intriguing discrepancy. Although instrumental, impersonal, and cognitive content is more prevalent in employer reviews, symbolic, personal, and emotional content dominates employer reviews’ favorability rating. In terms of practical implications, this result shows that merely inspecting the frequency of attributes mentioned in employer review text comments as a basis for changing company policies or employer branding efforts might be misguided. 
We discuss implications for theory and future research, and provide our dictionary for further scholarly and practical use.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 4","pages":"579-593"},"PeriodicalIF":2.6,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12478","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141358434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}