{"title":"“The interviewer is a machine!” Investigating the effects of conventional and technology-mediated interview methods on interviewee reactions and behavior","authors":"Emmanuelle P. Kleinlogel, Marianne Schmid Mast, Dinesh Babu Jayagopi, Kumar Shubham, Anaïs Butera","doi":"10.1111/ijsa.12433","DOIUrl":"10.1111/ijsa.12433","url":null,"abstract":"<p>Despite the growing number of organizations interested in the use of asynchronous video interviews (AVIs), little is known about its impact on interviewee reactions and behavior. We randomly assigned participants (<i>N</i> = 299) from two different countries (Switzerland and India) to a face-to-face interview, an avatar-based video interview (with an avatar as a virtual recruiter), or a text-based video interview (with written questions) and collected data on a set of self-rated and observer-rated criteria. Overall, we found that whereas participants reported more negative reactions towards the two asynchronous interviews, observer ratings revealed similar performance across the three interviews and lower stress levels in the two AVIs. These findings suggest that despite technology-mediated interview methods still not being well-accepted, interviewees are not at a disadvantage when these methods are used in terms of how well interviewees perform and how stressed they appear to external observers. Implications are discussed.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 3","pages":"403-419"},"PeriodicalIF":2.2,"publicationDate":"2023-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12433","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42250329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Selecting entrepreneurial employees using a person-group fit perspective","authors":"Roshni Das","doi":"10.1111/ijsa.12431","DOIUrl":"10.1111/ijsa.12431","url":null,"abstract":"<p>As jobs become unstructured and collective endeavor oriented, it is increasingly being realized that work groups must become more autonomous and entrepreneurial. The selection literature is however silent on the predictive mechanisms that may be leveraged to select group members with the desired competencies. The person-group fit perspective enables us to hypothesize and demonstrate that selection instruments geared toward gauging an individual's fit within a group are likely to manifest entrepreneurial competencies and behaviors in the individual. Further, the nature of the job, in terms of the structuredness of work (task formalization) and the repetitiveness of work activities (task routinization), has a moderating impact on the relationship between overall entrepreneurial competence and selection practices. The study has implications for incorporating the principles of person-group fit into the design of job profiles.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 2","pages":"336-346"},"PeriodicalIF":2.2,"publicationDate":"2023-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45226188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How similar is similar enough? Job profile similarity benchmarks using occupational information network data","authors":"Joseph D. Abraham, Dawn D. Lambert, Michael C. Mihalecz, Monica D. Elcott, Hannah S. Asbury, Penelope C. Palmer","doi":"10.1111/ijsa.12430","DOIUrl":"10.1111/ijsa.12430","url":null,"abstract":"<p>Job comparison research is critical to many human resources initiatives, such as transporting validity evidence. Job analysis methods often focus on critical attribute (e.g., tasks, work behaviors) overlap when assessing similarity, but profile similarity metrics represent an alternative or complementary approach for job comparisons. This paper utilizes Occupational Information Network (O*NET) data to establish a distribution of job profile correlations across all job pairs for five attributes – generalized work activities, knowledge, skills, abilities, and work styles. These correlations represent effect sizes, or degree of shared variance between jobs. Practitioners may reference these correlational distributions as benchmarks for gauging the practical significance of the observed degree of similarity between two jobs of interest compared to the broader world of work.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 3","pages":"469-476"},"PeriodicalIF":2.2,"publicationDate":"2023-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42069104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intentional response distortion during the COVID-19 pandemic","authors":"Sydney L. Reichin, Danielle M. Tarantino, Rustin D. Meyer","doi":"10.1111/ijsa.12432","DOIUrl":"10.1111/ijsa.12432","url":null,"abstract":"<p>COVID-19 has abruptly and unexpectedly transformed nearly every aspect of work, including but not limited to increased unemployment rates and uncertainty regarding future job prospects. Response distortion has always been a concern given that many organizations rely on information that is self-reported by applicants regarding their potential employability (e.g., responses to self-reported personality instruments, resumes, interview responses). Drawing from the Valence-Instrumentality-Expectancy (VIE) theory of motivation, we propose that the uncertainty surrounding jobs may lead to amplified distorted responses on these measures in areas where COVID-19 was most salient. In a sample of 213 working adults [~50% female, age <i>M</i> = 38.48], the present study shows that increases in response distortion on a measure of conscientiousness were more pronounced as a function of (a) local COVID positivity rates and (b) job type, such that frontline workers distorted their responses the most. Findings are discussed in the context of VIE theory, personality measurement, and challenges with maintaining effective selection procedures.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 3","pages":"456-468"},"PeriodicalIF":2.2,"publicationDate":"2023-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12432","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49507454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Faking resistance of a quasi-ipsative RIASEC occupational interest measure","authors":"Sabah Rasheed, Chet Robie","doi":"10.1111/ijsa.12427","DOIUrl":"10.1111/ijsa.12427","url":null,"abstract":"<p>Quasi-ipsative (QI) forced-choice response formats are often recommended over single-stimulus (SS) as a method to reduce applicant faking. Across three studies we developed and tested a QI version of the RIASEC occupational interests scale. The first study established acceptable reliability and validity of the QI version. The second and third studies tested the efficacy of the QI version for faking prevention in simulated job applicant scenarios. The results revealed that although the QI and SS formats were similarly fakable for the primary targeted interest, faking was limited for the secondary target on the QI version. Future research should identify the specific contexts in which QI prevents faking on various individual differences measures to allow for accurate recommendations in applied settings.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 2","pages":"321-335"},"PeriodicalIF":2.2,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12427","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41473279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic identification of storytelling responses to past-behavior interview questions via machine learning","authors":"Adrian Bangerter, Eric Mayor, Skanda Muralidhar, Emmanuelle P. Kleinlogel, Daniel Gatica-Perez, Marianne Schmid Mast","doi":"10.1111/ijsa.12428","DOIUrl":"10.1111/ijsa.12428","url":null,"abstract":"<p>Structured interviews often feature past-behavior questions, where applicants are asked to tell a story about past work experience. Applicants often experience difficulties producing such stories. Automatic analyses of applicant behavior in responding to past-behavior questions may constitute a basis for delivering feedback and thus helping them improve their performance. We used machine learning algorithms to predict storytelling in transcribed speech of participants responding to past-behavior questions in a simulated selection interview. Responses were coded as to whether they featured a story or not. For each story, utterances were also manually coded as to whether they described the situation, the task/action performed, or results obtained. The algorithms predicted whether a response features a story or not (best accuracy: 78%), as well as the count of situation, task/action, and response utterances. These findings contribute to better automatic identification of verbal responses to past-behavior questions and may support automatic provision of feedback to applicants about their interview performance.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 3","pages":"376-387"},"PeriodicalIF":2.2,"publicationDate":"2023-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12428","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49404539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Notice","authors":"","doi":"10.1111/ijsa.12426","DOIUrl":"https://doi.org/10.1111/ijsa.12426","url":null,"abstract":"","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 3","pages":"484"},"PeriodicalIF":2.2,"publicationDate":"2023-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"50140507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Job seekers' attitudes toward cybervetting in China: Platform comparisons and relationships with social media posting habits and individual differences","authors":"Nicolas Roulin, Zhixin Liu","doi":"10.1111/ijsa.12424","DOIUrl":"10.1111/ijsa.12424","url":null,"abstract":"<p>Cybervetting, or reviewing applicants' social media profiles, has become a central part of the hiring process for many organizations. Yet, extant cybervetting research is largely limited to Western platforms and samples. The present study examines the three core elements of attitudes toward cybervetting (ATC—perceived justice, privacy invasion, and face validity) using a sample of 200 Chinese job seekers providing their views on three popular platforms in China (WeChat, QQ, and Weibo). Attitudes were negative across all platforms, although slightly more positive for WeChat. ATC were associated with job seekers' social media posting habits (e.g., posting positive content more frequently) and individual differences (i.e., gender and extraversion). Organizations should be mindful that cybervetting might impede the recruitment of talents.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 2","pages":"347-354"},"PeriodicalIF":2.2,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41760150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Cross-national applicability of a game-based cognitive assessment","authors":"Xander van Lill, Laird McColl, Matthew Neale","doi":"10.1111/ijsa.12425","DOIUrl":"10.1111/ijsa.12425","url":null,"abstract":"<p>New technology has had a discernable impact on how organizations recruit and select potential employees. Game-based assessment has emerged as a potential technology that can be used to enhance the assessment of individual differences and applicants' views of the selection process. However, studies investigating the psychometric properties and predictive validity of game-based assessments are still lacking. This study investigated the structural equivalence of a game-based assessment of cognitive ability across 228 Australians and 239 South Africans. A smaller sample of 115 South Africans also received work performance ratings to investigate the predictive validity of the cognitive assessment. Results of factor analysis supported a strong general factor of cognitive ability across the entire sample but only partial metric and scalar invariance across the two nations. The general factor of the game-based assessment further revealed promising results in terms of its predictive validity for five broad dimensions of individual work performance.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 2","pages":"302-320"},"PeriodicalIF":2.2,"publicationDate":"2023-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12425","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45481081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trust in hybrid human-automated decision-support","authors":"Felix Kares, Cornelius J. König, Richard Bergs, Clea Protzel, Markus Langer","doi":"10.1111/ijsa.12423","DOIUrl":"10.1111/ijsa.12423","url":null,"abstract":"<p>Research has examined trust in humans and trust in automated decision support. Although reflecting a likely realization of decision support in high-risk tasks such as personnel selection, trust in hybrid human-automation teams has thus far received limited attention. In two experiments (<i>N</i><sub>1</sub> = 170, <i>N</i><sub>2</sub> = 154) we compare trust, trustworthiness, and trusting behavior for different types of decision-support (automated, human, hybrid) across two assessment contexts (personnel selection, bonus payments). We additionally examined a possible trust violation by presenting one group of participants a preselection that included predominantly male candidates, thus reflecting possible unfair bias. Whereas fully-automated decisions were trusted less, results suggest that trust in hybrid decision support was similar to trust in human-only support. Trust violations were not perceived differently based on the type of support. We discuss theoretical (e.g., trust in hybrid support) and practical implications (e.g., keeping humans in the loop to prevent negative reactions).</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"31 3","pages":"388-402"},"PeriodicalIF":2.2,"publicationDate":"2023-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12423","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46767888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}