{"title":"Why Participant Perceptions of Assessment Center Exercises Matter: Justice, Motivation, Self-Efficacy, and Performance","authors":"Sylvia G. Roch, Kathryn Devon","doi":"10.1111/ijsa.70002","DOIUrl":"https://doi.org/10.1111/ijsa.70002","url":null,"abstract":"<div><p>Despite expectations, assessment center (AC) participants' performance ratings often are not strongly correlated across AC exercises. Why this is so remains a puzzle. One piece of the puzzle may be that participants view AC exercises with varying levels of motivation, justice, and self-efficacy, which relate to exercise performance, the topic of the current research. Based on 123 participants completing an AC consisting of six exercises (two leaderless group discussions, an oral presentation, a written case analysis, a personality assessment, and a cognitive ability exercise), results showed that motivation, self-efficacy, and procedural justice levels differed among exercises and generally related to exercise performance. Two interventions designed to improve how participants perceive AC exercises (one focusing on self-efficacy and the other on justice) were not successful. Implications are discussed.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143111799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Games Always Fun and Fair? A Comparison of Reactions to Different Game-Based Assessments","authors":"Marie Luise Ohlms, Klaus G. Melchers","doi":"10.1111/ijsa.12520","DOIUrl":"https://doi.org/10.1111/ijsa.12520","url":null,"abstract":"<p>Game-based assessment (GBA) has garnered attention in the personnel selection and assessment context owing to its postulated potential to improve applicant reactions. However, GBAs can differ considerably depending on their specific design. Therefore, we sought to determine whether test taker reactions to GBAs vary owing to the different manifestations that GBAs may take on, and to test takers' individual preferences for such assessments. In an experimental study, each of <i>N</i> = 147 participants was shown six different GBAs and asked to rate several applicant reaction variables concerning these assessments. We found that reactions to GBAs were not inherently positive even though GBAs were generally perceived as enjoyable. However, perceptions of fairness and organizational attractiveness varied considerably between GBAs. Participants' age and experience with video games were related to reactions but had less impact than the different GBAs. Our results suggest that a technology-as-designed approach, which considers GBAs as a combination of multiple components (e.g., game elements), is crucial in GBA research to provide generalizable results for theory and practice.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12520","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing Proctored and Unproctored Cognitive Ability Testing in High-Stakes Personnel Selection","authors":"Tore Nøttestad Norrøne, Morten Nordmo","doi":"10.1111/ijsa.70001","DOIUrl":"https://doi.org/10.1111/ijsa.70001","url":null,"abstract":"<div><p>New advances in computerized adaptive testing (CAT) have increased the feasibility of high-stakes unproctored testing of general mental ability (GMA) in personnel selection contexts. This study presents the results from a within-subject investigation of the convergent validity of unproctored tests. Three batteries of cognitive ability tests were administered during personnel selection in the Norwegian Armed Forces. A total of 537 candidates completed two sets of proctored fixed-length GMA tests before and during the selection process. In addition, an at-home unproctored CAT battery was administered before the selection process began. Differences and similarities between the convergent validity of the tests were evaluated. The convergent validity coefficients did not significantly differ between proctored and unproctored batteries, at both the observed GMA score level and the latent factor level. The distributions and standardized residuals of test scores comparing proctored-proctored and proctored-unproctored pairs were overall quite similar and showed no evidence of score inflation or deflation in the unproctored tests. The similarities between proctored and unproctored results also extended to the moderately searchable word similarity test. Although some unlikely individual cases were observed, the overall results suggest that the unproctored tests maintained their convergent validity.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Meta-Analysis of Accent Bias in Employee Interviews: The Effects of Gender and Accent Stereotypes, Interview Modality, and Other Moderating Features","authors":"Henri T. Maindidze, Jason G. Randall, Michelle P. Martin-Raugh, Katrisha M. Smith","doi":"10.1111/ijsa.12519","DOIUrl":"https://doi.org/10.1111/ijsa.12519","url":null,"abstract":"<p>To address concerns of subtle discrimination against stigmatized groups, we meta-analyze the magnitude and moderators of bias against non-standard accents in employment interview evaluations. Results from a multi-level random-effects meta-analysis (unique effects: <i>k</i> = 41, <i>N</i> = 7,596; multi-level effects accounting for dependencies: <i>k</i> = 120, <i>N</i> = 20,873) demonstrate that standard-accented (SA) interviewees are consistently favored over non-standard-accented (NSA) interviewees (<i>d</i> = 0.46). Accent bias is stronger against women compared to men, particularly when evaluator samples are predominantly female, and was strongly predicted by interviewers' stereotypes of NSA interviewees as less competent and, to a lesser extent, as less warm. Accent bias was not significantly impacted by perceptions of comprehensibility, accentedness, accent type, interview modality, study rigor, or job speaking skill requirements.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12519","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward Theory-Based Volitional Personality Development Interventions at Work","authors":"Sofie Dupré, Bart Wille","doi":"10.1111/ijsa.70000","DOIUrl":"https://doi.org/10.1111/ijsa.70000","url":null,"abstract":"<div><p>In this article, we respond to four commentaries (Li et al., 2024; Hennecke & Ingold, 2025; Perossa & Connelly, 2024; Ones et al., 2024) on our article “Personality development goals at work: A new frontier in personality assessment in organizations.” We start by addressing four overarching considerations from the commentaries, including (a) how to approach PDG assessment, (b) the feasibility of personality development interventions, (c) potential trade-offs involved, and (d) the value of personality development beyond established HR practices. Next, in an attempt to integrate these considerations and stimulate future research in this area, we outline three critical elements of what we believe can be the foundation of theory-based personality development interventions at work. For this purpose, we first posit that personality development at work can be rethought such that the focus shifts from “changing an employee's trait levels” to “expanding that employee's comfort zone across a range of personality states.” Second, to have sustained effects, interventions need to accomplish more than simply “learning new behaviors,” by effectively targeting all layers of personality—behavioral, cognitive, and emotional. Finally, we introduce optimal functioning, encompassing both performance and well-being aspects, as the ultimate criterion for evaluating the success of personality development interventions. We hope these reactions and integrative ideas will inspire future research on personality development goals assessment and personality development interventions in the work context.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143116278","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluating the Impact of Faking on the Criterion-Related Validity of Personality Assessments","authors":"Andrew B. Speer, Angie Y. Delacruz, Takudzwa Chawota, Lauren J. Wegmeyer, Andrew P. Tenbrink, Carter Gibson, Chris Frost","doi":"10.1111/ijsa.12518","DOIUrl":"https://doi.org/10.1111/ijsa.12518","url":null,"abstract":"<p>Personality assessments are commonly used in hiring, but concerns about faking have raised doubts about their effectiveness. Qualitative reviews show mixed and inconsistent impacts of faking on criterion-related validity. To address this, a series of meta-analyses were conducted using matched samples of honest and motivated respondents (i.e., instructed to fake, applicants). In 80 paired samples, the average difference in validity coefficients between honest and motivated samples across five-factor model traits ranged from 0.05 to 0.08 (largest for conscientiousness and emotional stability), with the validity ratio ranging from 64% to 72%. Validity was attenuated when candidates faked, regardless of sample type, trait relevance, or the importance of impression management, though variation existed across criterion types. Both real applicant samples (<i>k</i> = 25) and instructed response conditions (<i>k</i> = 55) showed a reduction in validity across honest and motivated conditions, including when managerial ratings of job performance were the criterion. Thus, faking impacted validity in operational samples. This suggests that practitioners should be cautious when relying on concurrent validation evidence (for personality inventories) and expect attenuated validity in operational applicant settings, particularly for conscientiousness and emotional stability scales. That said, it is important to highlight that personality assessments generally maintained useful validity even under motivated conditions.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12518","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143112513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention To Detail and Cyber Skill: Associated Beyond General Intelligence in Cyber-Soldier Conscripts","authors":"Pär-Anders Albinsson, Patrik Lif","doi":"10.1111/ijsa.12517","DOIUrl":"https://doi.org/10.1111/ijsa.12517","url":null,"abstract":"<p>We explore the potential of <i>attention to detail</i> as a component in the selection of conscripts for the cyber track in the Swedish Armed Forces. To measure attention to detail, we adapted the embedded figures test and administered it to conscripts as part of the extended mustering. We report results from a conscript selection with 97 test participants of which 56 continued to become cyber soldiers, finishing their training the following year. Attention to detail showed little correlation with the cognitive-ability components of the mustering test battery, suggesting that attention to detail is unlikely to be strongly associated with general intelligence for this population. Attention to detail was the only cognitive-ability component of the mustering test battery that showed a significant predictive relationship with practical post-training cyber skill (<i>R</i><sup>2</sup> = 0.10). Therefore, we believe that it could be a useful additional component in the selection process.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12517","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143118449","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Is Anybody Watching Me? Effects of Information About Evaluators on Applicants' Use of Impression Management in Asynchronous Video Interviews","authors":"Koralie Orji, Nicolas Roulin, Adrian Bangerter","doi":"10.1111/ijsa.12515","DOIUrl":"https://doi.org/10.1111/ijsa.12515","url":null,"abstract":"<p>Asynchronous video interviews (AVIs) are widely used in hiring, but the lack of social presence (e.g., uncertainty about the identity of evaluators) may hinder effective impression management (IM) for applicants. This study examined whether providing information about evaluators facilitates applicant IM use in AVIs, specifically ingratiation or self-promotion. It also explored the experience involved in applicants' response generation. In a mock AVI, 160 participants were randomly assigned to one of two conditions (with or without information about the evaluator). They reported their thoughts after watching their interview recordings. Providing information about the evaluator enhanced ingratiation but did not affect self-promotion. Qualitative analyses revealed that participants with evaluator information were more likely to reference organizational values and align themselves with the evaluator, whereas those without it concentrated more on demonstrating their job-relevant skills. Participants' reported thoughts and emotions suggested that formulating suitable answers and interacting with a computer represent major concerns.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tell Me More! Examining the Benefits of Adding Structured Probing in Asynchronous Video Interviews","authors":"Rahul D. Patel, Deborah M. Powell, Nicolas Roulin, Jeffrey R. Spence","doi":"10.1111/ijsa.12514","DOIUrl":"https://doi.org/10.1111/ijsa.12514","url":null,"abstract":"<p>The personnel selection field has observed the rising use of asynchronous video interviews (AVI). The current study investigates whether follow-up questions (probes) can optimize the applicant experience in AVIs. Across two experimental studies with participants recruited from Prolific, we investigated whether AVIs with probing promote applicant reactions (e.g., the opportunity to perform perceptions) toward the AVI and how probing influences interview behaviors, applicant perceptions, and interview performance ratings. In Study 1, 404 participants were randomly assigned to either an AVI with probing or an AVI without probing. Results indicated that probing directly improved the opportunity to perform perceptions and interview performance ratings. In addition, probing positively impacted honest impression management and motivation to perform indirectly through participants' perceived opportunity to perform. However, mediation analyses suggested that the effect of probing on interview performance ratings was driven by response length. In Study 2 (<i>n</i> = 271), we teased apart the effects of the inherently added response time that probing affords applicants with an additional condition that matched the response time of probes. Relative to Study 1, probing only slightly improved the opportunity to perform perceptions, but the effect of probing on the opportunity to perform perceptions was greater when compared to an AVI with an equivalent response time. In addition, probing positively impacted interview performance ratings, above and beyond their increased response time. Implications, limitations, and directions for future research are discussed.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12514","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861439","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of Applicants' Use of Generative AI in Personnel Selection: Towards a More Nuanced View?","authors":"Filip Lievens, Patrick D. Dunlop","doi":"10.1111/ijsa.12516","DOIUrl":"https://doi.org/10.1111/ijsa.12516","url":null,"abstract":"<div><p>Generative AI (GenAI) has made rapid inroads in assessment, as a growing number of applicants rely on it as a coach in unproctored assessments of various selection procedures. This has led to assertions that applicants' GenAI use undermines key assumptions of the predictive model underlying selection and is thus disruptive for organizations' current unproctored assessments, prompting organizations to adopt various strategies to deter and detect its use. In this provocation article, we present a more nuanced view. To this end, we start by reviewing recent research on the effects of applicants' use of GenAI in assessment and discuss the evidence of its potential to disrupt assessment validity. Next, we draw on test coaching frameworks to discuss three scenarios of how applicants' use of GenAI might affect an assessment's mean scores and criterion-related validity. These perspectives highlight that the use of GenAI might have not only negative but potentially also positive consequences for both applicants and organizations. It is pivotal to distinguish among these scenarios because they lead to different strategies for organizations to deal with applicant use of GenAI.</p></div>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"33 1","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142861509","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}