Title: Prejudiced interactions with large language models (LLMs) reduce trustworthiness and behavioral intentions among members of stigmatized groups
Authors: Zachary W. Petzel, Leanne Sowerby
DOI: 10.1016/j.chb.2025.108563
Journal: Computers in Human Behavior, Volume 165, Article 108563 (published 2025-01-15)
URL: https://www.sciencedirect.com/science/article/pii/S074756322500010X
Citations: 0
Abstract
Users report prejudiced responses generated by large language models (LLMs) such as ChatGPT. Across three preregistered experiments, members of stigmatized social groups (Black Americans, women) rated LLMs as more trustworthy after viewing unbiased interactions with ChatGPT than after viewing AI-generated prejudice (i.e., racial or gender disparities in salary). Notably, higher trustworthiness accounted for increased behavioral intentions to use LLMs, but only among stigmatized social groups. Conversely, White Americans were more likely to use LLMs when AI-generated prejudice confirmed their implicit racial biases, while men intended to use LLMs when responses matched their implicit gender biases. Results suggest that reducing AI-generated prejudice may promote trustworthiness of LLMs among members of stigmatized social groups, increasing their intentions to use AI tools. Importantly, addressing AI-generated prejudice could minimize social disparities in the adoption of LLMs, which might otherwise exacerbate professional and educational disparities. Given the expected integration of AI in professional and educational settings, these findings may guide equitable implementation strategies among employees and students, in addition to extending theoretical models of technology acceptance by suggesting additional mechanisms underlying behavioral intentions to use emerging technologies (e.g., trustworthiness).
Journal overview:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.