{"title":"Evaluating for Evidence of Sociodemographic Bias in Conversational AI for Mental Health Support.","authors":"Yee Hui Yeo,Yuxin Peng,Muskaan Mehra,Jamil Samaan,Joshua Hakimian,Allistair Clark,Karisma Suchak,Zoe Krut,Taiga Andersson,Susan Persky,Omer Liran,Brennan Spiegel","doi":"10.1089/cyber.2024.0199","DOIUrl":null,"url":null,"abstract":"The integration of large language models (LLMs) into healthcare highlights the need to ensure their efficacy while mitigating potential harms, such as the perpetuation of biases. Current evidence on the existence of bias within LLMs remains inconclusive. In this study, we present an approach to investigate the presence of bias within an LLM designed for mental health support. We simulated physician-patient conversations by using a communication loop between an LLM-based conversational agent and digital standardized patients (DSPs) that engaged the agent in dialogue while remaining agnostic to sociodemographic characteristics. In contrast, the conversational agent was made aware of each DSP's characteristics, including age, sex, race/ethnicity, and annual income. The agent's responses were analyzed to discern potential systematic biases using the Linguistic Inquiry and Word Count tool. Multivariate regression analysis, trend analysis, and group-based trajectory models were used to quantify potential biases. Among 449 conversations, there was no evidence of bias in both descriptive assessments and multivariable linear regression analyses. Moreover, when evaluating changes in mean tone scores throughout a dialogue, the conversational agent exhibited a capacity to show understanding of the DSPs' chief complaints and to elevate the tone scores of the DSPs throughout conversations. This finding did not vary by any sociodemographic characteristics of the DSP. Using an objective methodology, our study did not uncover significant evidence of bias within an LLM-enabled mental health conversational agent. These findings offer a complementary approach to examining bias in LLM-based conversational agents for mental health support.","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"3 1","pages":""},"PeriodicalIF":4.2000,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Cyberpsychology, behavior and social networking","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1089/cyber.2024.0199","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, SOCIAL","Score":null,"Total":0}
引用次数: 0
Abstract
The integration of large language models (LLMs) into healthcare highlights the need to ensure their efficacy while mitigating potential harms, such as the perpetuation of biases. Current evidence on the existence of bias within LLMs remains inconclusive. In this study, we present an approach to investigating the presence of bias within an LLM designed for mental health support. We simulated physician-patient conversations using a communication loop between an LLM-based conversational agent and digital standardized patients (DSPs) that engaged the agent in dialogue while remaining agnostic to their own sociodemographic characteristics. In contrast, the conversational agent was made aware of each DSP's characteristics, including age, sex, race/ethnicity, and annual income. The agent's responses were analyzed for potential systematic biases using the Linguistic Inquiry and Word Count (LIWC) tool. Multivariable regression analysis, trend analysis, and group-based trajectory models were used to quantify potential biases. Across 449 conversations, there was no evidence of bias in either the descriptive assessments or the multivariable linear regression analyses. Moreover, when evaluating changes in mean tone scores over the course of a dialogue, the conversational agent demonstrated understanding of the DSPs' chief complaints and elevated the DSPs' tone scores throughout conversations. This finding did not vary by any sociodemographic characteristic of the DSP. Using an objective methodology, our study did not uncover significant evidence of bias within an LLM-enabled mental health conversational agent. These findings offer a complementary approach to examining bias in LLM-based conversational agents for mental health support.
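To make the workflow concrete, here is a minimal, hypothetical sketch of the pipeline the abstract describes: a DSP-agent communication loop whose agent responses are tone-scored, followed by a multivariable linear regression of mean tone on DSP characteristics. Everything below is an illustrative assumption, not the authors' code: `agent_reply`, `dsp_reply`, and `tone_score` are stand-ins (the study used an LLM-based agent and the proprietary LIWC tool), and the data are synthetic, so the regression only shows the form of the analysis, not the paper's results.

```python
# Illustrative sketch only: placeholder agent, DSP, and tone scorer.
import random

import pandas as pd
import statsmodels.formula.api as smf


def agent_reply(profile: dict, dsp_message: str) -> str:
    """Placeholder for the LLM conversational agent, which (unlike the
    DSP itself) is told the patient's sociodemographic profile."""
    return "I hear that you are feeling overwhelmed. Can you tell me more?"


def dsp_reply(turn: int) -> str:
    """Placeholder for the digital standardized patient, which stays
    agnostic to its own sociodemographic attributes."""
    return "I've been anxious and having trouble sleeping."


def tone_score(text: str) -> float:
    """Stand-in for a LIWC-style tone metric; random noise here so the
    pipeline runs end to end."""
    return random.gauss(50.0, 10.0)


def simulate_conversation(profile: dict, turns: int = 5) -> float:
    """Run one agent-DSP loop and return the mean tone of agent replies."""
    scores = []
    for t in range(turns):
        message = dsp_reply(t)
        response = agent_reply(profile, message)
        scores.append(tone_score(response))
    return sum(scores) / len(scores)


def main() -> None:
    random.seed(0)
    rows = []
    for _ in range(449):  # the study analyzed 449 conversations
        profile = {
            "age": random.randint(18, 80),
            "sex": random.choice(["male", "female"]),
            "race": random.choice(["white", "black", "hispanic", "asian"]),
            "income": random.choice([25_000, 55_000, 95_000, 150_000]),
        }
        profile["mean_tone"] = simulate_conversation(profile)
        rows.append(profile)
    df = pd.DataFrame(rows)

    # Multivariable linear regression of agent tone on DSP characteristics;
    # under a no-bias null, no coefficient should differ from zero.
    model = smf.ols("mean_tone ~ age + C(sex) + C(race) + income", data=df).fit()
    print(model.summary())


if __name__ == "__main__":
    main()
```

Because the tone scores here are generated independently of the profile, none of the sociodemographic coefficients should reach significance, which mirrors the null pattern the authors report; substituting a real agent and a real tone scorer would turn the same regression into an actual bias test.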
About the journal:
Cyberpsychology, Behavior, and Social Networking is a leading peer-reviewed journal that is recognized for its authoritative research on the social, behavioral, and psychological impacts of contemporary social networking practices. The journal covers a wide range of platforms, including Twitter, Facebook, internet gaming, and e-commerce, and examines how these digital environments shape human interaction and societal norms.
For over two decades, this journal has been a pioneering voice in the exploration of social networking and virtual reality, establishing itself as an indispensable resource for professionals and academics in the field. It is particularly celebrated for its swift dissemination of findings through rapid communication articles, alongside comprehensive, in-depth studies that delve into the multifaceted effects of interactive technologies on both individual behavior and broader societal trends.
The journal's scope encompasses the full spectrum of impacts, highlighting not only the potential benefits of these technologies but also the challenges they raise. By providing a platform for rigorous research and critical discussion, it fosters a deeper understanding of the complex interplay between technology and human behavior.