{"title":"The role of intent to harm in workplace aggression","authors":"Oren R. Shewach, Paul R. Sackett","doi":"10.1111/ijsa.12468","DOIUrl":"10.1111/ijsa.12468","url":null,"abstract":"<p>Conceptualizations of workplace aggression converge in treating intent to harm others as a necessary feature of aggression. However, inspection of workplace aggression scales suggests that many items do not specify intent to harm. In a series of three studies, we examined the effect of inclusion of intent to harm on workplace aggression's psychometric properties. Study 1 found that existing workplace aggression scales do not consistently specify or imply intent to harm. Study 2 found that inclusion of intent to harm has substantial implications for aggression's occurrence rate. Prior research that does not assess intent to harm overestimates the frequency of aggression. Study 3A found that workplace aggression's correlations with external variables were also overestimated when failing to include intent to harm. We found that aggression measured without specifying intent is highly correlated with counterproductive work behavior (CWB), whereas aggression measured with intent specified is empirically distinguished from CWB. In Study 3A, a construct-valid workplace aggression scale was created, called the Intentional Workplace Aggression Scale (IWAS). Study 3B showed that the IWAS displayed relationships with affective constructs, such as trait anger and emotional stability, as well as with situational variables, such as job satisfaction and organizational justice perceptions.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 3","pages":"361-380"},"PeriodicalIF":2.6,"publicationDate":"2024-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140381257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bias in job analysis survey ratings attributed to order effects","authors":"Rebecca Berenbon, Bridget McHugh, Philomina Abena Anyidoho","doi":"10.1111/ijsa.12469","DOIUrl":"10.1111/ijsa.12469","url":null,"abstract":"<p>When respondents complete long surveys, evidence suggests that they may change their response behavior as they become fatigued. The present study examines question order effects in a block-randomized job analysis survey that collected task ratings (importance, frequency, and needed at certification) for a certification program. Analyses showed that average task ratings were lower for tasks presented later in the survey for both complete and partial responses, though the effect of question ordering was stronger for non-completers in two rating categories (importance and needed at certification). Additionally, rater variance was lower for questions that appeared later in the survey. The results highlight the value of using randomization to mitigate order effects on survey response quality.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 3","pages":"343-360"},"PeriodicalIF":2.6,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12469","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140172057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Faking good and bad on self-reports versus informant-reports of Dark Triad personality","authors":"Sarah A. Walker, Carolyn MacCann","doi":"10.1111/ijsa.12465","DOIUrl":"10.1111/ijsa.12465","url":null,"abstract":"<p>Research consistently demonstrates that people can distort their responses on self-report personality tests. Informant-reports (where a knowledgeable informant rates a target's personality) can be used as an alternative to self-ratings. However, there has been little research on the extent to which informants can distort their responses on personality tests (or their motives for response distortion). The current study examines the effects of experimentally induced response distortion on self- and informant-reports of the Dark Triad. The participants (<i>N</i> = 834 undergraduates) completed Dark Triad measures in a 2 × 3 between-person design crossing format (self- vs. informant-report [imagined friend]) with instruction condition (answer honestly, look good, or look bad). “Look good” effects were significant for both self-reports (<i>d</i> = −1.22 to 1.42) and informant-reports (<i>d</i> = −1.35 to 0.62). “Look bad” effects were also significant for both self-reports (<i>d</i> = −0.56 to 3.58) and informant-reports (<i>d</i> = −0.55 to 3.70). The Five Factor Machiavellianism Inventory results were opposite to hypotheses, but Dirty Dozen Machiavellianism results were as expected. We conclude that people can distort Dark Triad scores for themselves (self-report) and on behalf of someone else (informant-report). We discuss the relevance of our findings for self- and informant-report assessment in applied contexts.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 3","pages":"329-342"},"PeriodicalIF":2.6,"publicationDate":"2024-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12465","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140075958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Paradoxical leadership on firm performance: What role can guanxi HRD practices play?","authors":"Yi-Ying Chang, Qilin Hu, Mathew Hughes, Taiwei Chang, Che-Yuan Chang","doi":"10.1111/ijsa.12464","DOIUrl":"10.1111/ijsa.12464","url":null,"abstract":"<p>Research shows that paradoxical leadership has a strong positive but inconsistent relationship with firm performance. Drawing on leadership contingency theory, we provide a theoretical model explaining how business unit level (BU-level) paradoxical leadership positively impacts BU-level performance mediated by BU-level corporate entrepreneurship (CE). However, we also show that this relationship can turn negative when the degree of firm-level Guanxi on human resource development (Guanxi HRD) practices is high. By relying on the responses from 276 BUs and performance archival data from the Taiwan Economic Journal database, we found that BU-level CE mediates the relationship between BU-level paradoxical leadership and BU-level performance. Firm-level Guanxi HRD practices diminish this effect and turn the positive relationship between BU-level paradoxical leadership and BU-level performance through BU-level CE negative. Our study reveals the dark side of firm-level Guanxi HRD practices and provides new theoretical and empirical insights that reconcile the relationship between paradoxical leadership and firm performance.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 2","pages":"309-327"},"PeriodicalIF":2.2,"publicationDate":"2024-02-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140417007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Employer and employee perceptions of cybervetting as a selection method","authors":"Yavuz Akbulut, İrem E. Yildirim Şen, Yusuf L. Şahin","doi":"10.1111/ijsa.12463","DOIUrl":"10.1111/ijsa.12463","url":null,"abstract":"<p>Cybervetting—reviewing candidates' online profiles—is a relatively new technique used in the personnel selection processes, but empirical studies have largely been conducted with western samples. In Study 1, we interviewed 20 employers from different sectors in a metropolitan city in Turkey and examined the characteristics and implications of the construct. We summarized the reasons given by participants who avoided cybervetting (i.e., irrelevance, validity, invasion of privacy, prejudice) and those who frequently relied on cybervetting (i.e., necessity, validity, consistency checking) and illustrated perceived positive and negative online posts. In Study 2, we collected data from 316 employees in the same city to examine their perceptions of cybervetting and its relationship to organizational justice constructs. In addition to confirming international scales of cybervetting and organizational justice, we found that the face validity of cybervetting and organizational justice constructs were significantly correlated. While some differences were observed between employers and managers, no gender differences were found. The implications of both studies were presented in line with the available literature.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 2","pages":"292-308"},"PeriodicalIF":2.2,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139753663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fostering metacognitive activities during job search: The Three Good Job Search Things intervention","authors":"Jolien Stremersch, Greet Van Hoye","doi":"10.1111/ijsa.12462","DOIUrl":"10.1111/ijsa.12462","url":null,"abstract":"<p>Based on principles from positive psychology and broaden-and-build theory, we examined whether conducting a Three Good Job Search Things intervention can increase metacognitive activities among job seekers. We further theorize that positive affect (i.e., activating and deactivating) serves as a mediating mechanism for the effect of the reflection intervention on metacognitive activities. We designed a randomized control group pretest–posttest experimental study with three groups (i.e., Three Good Job Search Things, active control, and passive control) and two measurement times (<i>N</i> = 116). Results show that the Three Good Job Search Things intervention increased job seekers' metacognitive activities as compared with the passive control group, but not as compared to the active control group. However, the active control group results did not differ significantly from the passive control group. The effects of the intervention on job seekers' metacognitive activities were not mediated by enhanced positive affect. This study contributes to the job search literature by expanding our understanding of how metacognition during job search can be improved by reflecting upon one's job search process.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 2","pages":"279-291"},"PeriodicalIF":2.2,"publicationDate":"2024-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139578371","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the role of interviewee cognitive capacities on impression management in face-to-face and virtual interviews","authors":"Benjamin Moon, Stephanie J. Law, Joshua S. Bourdage, Nicolas Roulin, Klaus G. Melchers","doi":"10.1111/ijsa.12460","DOIUrl":"10.1111/ijsa.12460","url":null,"abstract":"<p>Interviewees' use of impression management (IM) in job interviews is clearly related to individual differences such as personality. However, research has paid less attention to how interviewee cognitive capacities (i.e., cognitive ability and executive functions) influence IM use, even though interviewees’ cognitive capacities and IM are theoretically linked. The current research aimed to address this research gap through two studies. In Study 1, 166 undergraduate business students participated in mock face-to-face interviews with real recruiters. In Study 2, 294 job-seeking participants recruited through Prolific completed a mock asynchronous video interview. Overall, cognitive ability was negatively related to deceptive IM while perceived incongruency (i.e., a gap between desired and perceived current impressions conveyed to others) was positively related to deceptive IM in both studies. Furthermore, cognitive ability and working memory updating, but not inhibition and shifting nor incongruency, were negatively related to honest IM in Study 2. Additionally, in both studies the relations between personality traits and interview IM were generally in line with findings from prior research. Overall, our findings provide a more comprehensive understanding of how interview IM relates to interviewee individual differences and interview performance in different forms of job interviews.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 2","pages":"261-278"},"PeriodicalIF":2.2,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138524899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Train driver selection: The impact of cognitive ability on train driving performance","authors":"Michael D. Collins","doi":"10.1111/ijsa.12461","DOIUrl":"10.1111/ijsa.12461","url":null,"abstract":"<p>Train driving is a demanding form of human performance where inattention or distraction can lead to serious errors and accidents. Train drivers, therefore, require a unique set of abilities to deal with these demands, especially when exposed to competing or conflicting performance expectations (e.g., on-time performance and following safety rules). Cognitive abilities, in particular, are considered essential to safe and effective train driving. Selective attention is one such ability, however, other cognitive abilities can be equally important. Drawing on self-control theory, this article examines the combined effect of selective attention, fluid intelligence, and verbal reasoning on train driving performance. The results of a study involving 101 experienced train drivers indicates that drivers with low selective attention, low fluid intelligence, and low verbal reasoning perform worse on a train simulator driving assessment than drivers who are higher in these cognitive abilities. The results from this study provide direction for future rail safety research and guidance for practitioners responsible for assessing and selecting train drivers.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 2","pages":"249-260"},"PeriodicalIF":2.2,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12461","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138524847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Gendered competencies and gender composition: A human versus algorithm evaluator comparison","authors":"Stephanie M. Merritt, Ann Marie Ryan, Cari Gardner, Joshua Liff, Nathan Mondragon","doi":"10.1111/ijsa.12459","DOIUrl":"10.1111/ijsa.12459","url":null,"abstract":"<p>The rise in AI-based assessments in hiring contexts has led to significant media speculation regarding their role in exacerbating or mitigating employment inequities. In this study, we examined 46,214 ratings from 4947 interviews to ascertain if gender differences in ratings were related to interactions among content (stereotype-relevant competencies), context (occupational gender composition), and rater type (human vs. algorithm). Contrary to the hypothesized effects of smaller gender differences in algorithmic scoring than with human raters, we found that both human and algorithmic ratings of men on agentic competencies were higher than those given to women. Also unexpected, the algorithmic scoring evidenced greater gender differences in communal ratings than humans (with women rated higher than men) and similar differences in non-stereotypic competency ratings that were in the opposite direction (humans rated men higher than women, while algorithms rated women higher than men). In more female-dominated occupations, humans tended to rate applicants as generally less competent overall relative to the algorithms, but algorithms rated men more highly in these occupations. Implications for auditing for group differences in selection contexts are discussed.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 2","pages":"225-248"},"PeriodicalIF":2.2,"publicationDate":"2023-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138524863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Does understanding what a test measures make a difference? On the relevance of the ability to identify criteria for situational judgment test performance","authors":"Nomi Reznik, Stefan Krumm, Jan-Philipp Freudenstein, Anna L. Heimann, Pia Ingold, Philipp Schäpers, Martin Kleinmann","doi":"10.1111/ijsa.12458","DOIUrl":"10.1111/ijsa.12458","url":null,"abstract":"<p>Situational judgment tests (SJTs) are low-fidelity simulations that are often used in personnel selection. Previous research has provided evidence that the ability to identify criteria (ATIC)—individuals' capability to detect underlying constructs in nontransparent personnel selection procedures—is relevant in simulations in personnel selection, such as assessment centers and situational interviews. Building on recent theorizing about response processes in SJTs as well as on previous empirical results, we posit that ATIC predicts SJT performance. We tested this hypothesis across two preregistered studies. In Study 1, a between-subjects planned-missingness design (<i>N</i> = 391 panelists) was employed and 55 selected items from five different SJTs were administered. Mixed-effects-modeling revealed a small effect for ATIC in predicting SJT responses. Results were replicated in Study 2 (<i>N</i> = 491 panelists), in which a complete teamwork SJT was administered with a high- or a low-stakes instruction and showed either no or a small correlation with ATIC, respectively. We compare these findings with other studies, discuss implications for our understanding of response processes in SJTs, and derive avenues for future research.</p>","PeriodicalId":51465,"journal":{"name":"International Journal of Selection and Assessment","volume":"32 2","pages":"210-224"},"PeriodicalIF":2.2,"publicationDate":"2023-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/ijsa.12458","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138524864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}