Asynchronous Video Interviews in Recruitment and Selection: Lights, Camera, Action!
Patrick D. Dunlop, Louis Hickman, Djurre Holtrop, Deborah M. Powell
International Journal of Selection and Assessment, 33(2). DOI: 10.1111/ijsa.70010. Published 2025-03-25.
Applicant Perceptions of Selection Methods: Replicating and Extending Previous Research
Lara D. Zibarras, Gloria Castano, Stephen Cuppello
International Journal of Selection and Assessment, 33(2). DOI: 10.1111/ijsa.70007. Published 2025-03-24. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.70007
Abstract: This paper both replicates and extends previous findings on applicant fairness perceptions of selection methods. Using a working population (N = 281), we explored applicant perceptions of nine 'traditional' selection methods alongside eight 'newer' methods, including game-based assessment, online interviews, and situational judgement tests. Work sample tests, knowledge tests, and in-person interviews were rated most positively, whilst asynchronous video interviews, personal contacts, and professional social media were rated least positively. Some differences emerged depending on whether participants had previous experience completing a selection method, the method's mode of delivery, and the country in which the participant worked. In line with previous research, selection methods appeared more acceptable and fairer to applicants when they were job-related, offered candidates the opportunity to demonstrate their skills and abilities, and were based on sound scientific research. The results are discussed in terms of theoretical and practical implications and future research.
A Bridge Too Far: Signalling Effects of Artificial Intelligence Evaluation of Job Interviews
Agata Mirowska, Jbid Arsenyan
International Journal of Selection and Assessment, 33(2). DOI: 10.1111/ijsa.70008. Published 2025-03-17. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.70008
Abstract: Deploying artificial intelligence (AI) for job interview evaluations, while a potential signal of high innovativeness, may risk suggesting poor people orientation on the part of the organisation. This study uses an experimental methodology to investigate whether AI evaluation (AIE) is interpreted as a positive (high innovativeness) or negative (low people orientation) signal by the job applicant, and whether the ensuing effects on attitudes towards the organisation depend on the type of organisation implementing the technology. Results indicate that AIE is interpreted more strongly as a signal of how the organisation treats people than of how innovative it is. Additionally, removing humans from the selection process appears to be a 'bridge too far' when it comes to technological advances in selection.
Investigating Effects of Providing Information and Professional Experience on Production of Stories in Response to Past-Behavior Questions
Marie-Eve Tescari, Adrian Bangerter, Christina Györkös, Charlène Padoan, Sandrine Fasel, Lucile Nicolier, Laurène Hondius, Karen Ohnmacht
International Journal of Selection and Assessment, 33(2). DOI: 10.1111/ijsa.70004. Published 2025-03-04.
Abstract: Past-behavior questions invite applicants to describe their behavior in a past work-related situation, that is, to tell a story about that situation. However, applicants often fail to produce stories in response to such questions. In two experiments (n = 91 and n = 102), we investigated the effects of providing information about the questions and of professional experience (2 × 2 between-subjects design) on story production and interview performance. In Experiment 1, providing information and professional experience did not affect story production, but professional experience increased performance. In Experiment 2, we strengthened the information manipulation by giving more explicit guidance about expected responses, and we increased the contrast in professional experience. Experienced participants again received better performance ratings than inexperienced ones; neither providing information nor professional experience affected story production, but both affected performance. Story narrative quality was coded post hoc in both studies: providing information and professional experience did not affect narrative quality in Experiment 1 but did in Experiment 2. Results add to our understanding of individual differences affecting responses to past-behavior questions and have practical implications for facilitating appropriate responses.
All Your Base Are Belong to Us: The Urgent Reality of Unproctored Testing in the Age of LLMs
Louis Hickman
International Journal of Selection and Assessment, 33(2). DOI: 10.1111/ijsa.70005. Published 2025-03-04. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.70005
Abstract: The release of new generative artificial intelligence (AI) tools, including new large language models (LLMs), continues at a rapid pace. Upon the release of OpenAI's o1 models, I re-ran Hickman et al.'s (2024) analyses examining how well LLMs perform on a quantitative ability (number series) test. GPT-4 scored below the 20th percentile (compared to thousands of human test takers), but o1 scored at the 95th percentile. In response to these updated findings and Lievens and Dunlop's (2025) article on the effects of LLMs on the validity of pre-employment assessments, I make an urgent call to action for selection and assessment researchers and practitioners. A recent survey suggests that a large proportion of applicants are already using generative AI tools to complete high-stakes assessments, and it seems that no current assessment will be safe for long. I therefore outline possibilities for the future of testing, detail their benefits and drawbacks, and provide recommendations. These possibilities are: increased use of proctoring, strict time limits, LLM detection software, think-aloud (or similar) protocols, collecting and analyzing trace data, emphasizing samples over signs, and redesigning assessments to allow LLM use during completion. Several of these possibilities point toward future research to modernize assessment: how to design valid assessments that allow LLM use, how to effectively use trace test-taker data, and whether think-aloud protocols can help differentiate experts from novices.
Social Desirability Tendency in Personality-Based Job Interviews—A Question of Interview Format?
Valerie Schröder, Anna Luca Heimann, Pia Ingold, Nicolas Roulin, Marianne Schmid Mast, Manuel Bachmann, Martin Kleinmann
International Journal of Selection and Assessment, 33(2). DOI: 10.1111/ijsa.70006. Published 2025-03-04.
Abstract: Today's variety of interview formats raises the question of their interchangeability. For personality interviews, a crucial question is whether different formats are comparably robust against applicants' social desirability tendency (SDT), so that personality can be measured accurately. Using a within-subjects design in a simulated selection setting with 211 participants, this study examined how SDT affects personality scores in a face-to-face, an asynchronous video, and a written interview, all with similar questions designed to measure personality. Relationships between interview scores and SDT were weakest in the face-to-face format, strongest in the written format, and differed depending on which personality trait was assessed. The findings highlight the differing suitability of interview formats for measuring personality, with important implications for interview design and personality assessment.
Attitudes Toward Cybervetting in Germany: Impact on Organizational Attractiveness Depends on Social Media Platform
Philipp Schäpers, Franz W. Mönke, Chiara-Maria Frieler, Nicolas Roulin, Johannes Basch
International Journal of Selection and Assessment, 33(1). DOI: 10.1111/ijsa.70003. Published 2025-02-17. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.70003
Abstract: Cybervetting, the practice of assessing applicants' social media in personnel selection, is widely used. However, the individuals concerned often perceive this practice negatively. We propose that attitudes toward cybervetting may depend on the platform used and on the cultural context, and we therefore transfer the attitudes-toward-cybervetting scale to a context with strict data regulations: Germany. In an online between-subjects experiment with platform users and non-users (N = 100 working professionals and students), we examined attitudes toward cybervetting on different social media platforms (professional: LinkedIn vs. personal: Instagram) and their relationship with organizational attractiveness. We found that German participants viewed cybervetting on professional platforms with more skepticism than American participants. Hierarchical regression analysis revealed higher perceived fairness, lower perceived invasion of privacy, and higher organizational attractiveness when cybervetting was conducted on professional platforms.
Why Participant Perceptions of Assessment Center Exercises Matter: Justice, Motivation, Self-Efficacy, and Performance
Sylvia G. Roch, Kathryn Devon
International Journal of Selection and Assessment, 33(1). DOI: 10.1111/ijsa.70002. Published 2025-02-04.
Abstract: Despite expectations, assessment center (AC) participants' performance ratings often are not strongly correlated across AC exercises. Why this is so remains a puzzle. One piece of the puzzle may be that participants view AC exercises with varying levels of motivation, justice, and self-efficacy, which in turn relate to exercise performance — the topic of the current research. Based on 123 participants completing an AC consisting of six exercises (two leaderless group discussions, an oral presentation, a written case analysis, a personality assessment, and a cognitive ability exercise), results showed that motivation, self-efficacy, and procedural justice levels differed among exercises and generally related to exercise performance. Two interventions designed to improve how participants perceive AC exercises (one focusing on self-efficacy, the other on justice) were not successful. Implications are discussed.
Are Games Always Fun and Fair? A Comparison of Reactions to Different Game-Based Assessments
Marie Luise Ohlms, Klaus G. Melchers
International Journal of Selection and Assessment, 33(1). DOI: 10.1111/ijsa.12520. Published 2025-01-27. Open access: https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12520
Abstract: Game-based assessment (GBA) has garnered attention in the personnel selection and assessment context owing to its postulated potential to improve applicant reactions. However, GBAs can differ considerably depending on their specific design. We therefore sought to determine whether test-taker reactions vary across the different forms GBAs can take and with test takers' individual preferences for such assessments. In an experimental study, each of N = 147 participants was shown six different GBAs and asked to rate several applicant-reaction variables for each assessment. Reactions to GBAs were not inherently positive, even though GBAs were generally perceived as enjoyable; perceptions of fairness and organizational attractiveness varied considerably between GBAs. Participants' age and experience with video games were related to reactions but had less impact than the differences between GBAs. Our results suggest that a technology-as-designed approach, which treats a GBA as a combination of multiple components (e.g., game elements), is crucial for GBA research to yield generalizable results for theory and practice.
Comparing Proctored and Unproctored Cognitive Ability Testing in High-Stakes Personnel Selection
Tore Nøttestad Norrøne, Morten Nordmo
International Journal of Selection and Assessment, 33(1). DOI: 10.1111/ijsa.70001. Published 2025-01-27.
Abstract: New advances in computerized adaptive testing (CAT) have increased the feasibility of high-stakes unproctored testing of general mental ability (GMA) in personnel selection contexts. This study presents a within-subject investigation of the convergent validity of unproctored tests. Three batteries of cognitive ability tests were administered during personnel selection in the Norwegian Armed Forces: 537 candidates completed two sets of proctored fixed-length GMA tests before and during the selection process, plus an at-home unproctored CAT battery before the selection process began. The convergent validity coefficients did not differ significantly between proctored and unproctored batteries, at either the observed GMA score or the latent factor level. The distributions and standardized residuals of test scores for the proctored-proctored and proctored-unproctored comparisons were quite similar overall and showed no evidence of score inflation or deflation in the unproctored tests. These similarities also extended to the word-similarity test, whose items are moderately searchable. Although some unlikely individual cases were observed, the overall results suggest that the unproctored tests maintained their convergent validity.