Gendered competencies and gender composition: A human versus algorithm evaluator comparison
Stephanie M. Merritt, Ann Marie Ryan, Cari Gardner, Joshua Liff, Nathan Mondragon
International Journal of Selection and Assessment, 32(2), 225-248. DOI: 10.1111/ijsa.12459
Abstract
The rise in AI-based assessments in hiring contexts has led to significant media speculation regarding their role in exacerbating or mitigating employment inequities. In this study, we examined 46,214 ratings from 4,947 interviews to ascertain whether gender differences in ratings were related to interactions among content (stereotype-relevant competencies), context (occupational gender composition), and rater type (human vs. algorithm). Contrary to the hypothesis that gender differences would be smaller in algorithmic scoring than in human ratings, we found that both human and algorithmic ratings of men on agentic competencies were higher than those given to women. Also unexpectedly, algorithmic scoring showed greater gender differences than human scoring in communal ratings (with women rated higher than men) and differences of similar magnitude but opposite direction in non-stereotypic competency ratings (humans rated men higher than women, while algorithms rated women higher than men). In more female-dominated occupations, humans tended to rate applicants as less competent overall relative to the algorithms, but algorithms rated men more highly in these occupations. Implications for auditing for group differences in selection contexts are discussed.
Journal Description
The International Journal of Selection and Assessment publishes original articles on all aspects of personnel selection, staffing, and assessment in organizations. By combining academic research with professionally led best practice, IJSA aims to develop new knowledge and understanding in these important areas of work psychology and contemporary workforce management.