Evaluating for Evidence of Sociodemographic Bias in Conversational AI for Mental Health Support.

IF 4.2 | CAS Tier 2 (Psychology) | JCR Q1 (PSYCHOLOGY, SOCIAL)
Yee Hui Yeo, Yuxin Peng, Muskaan Mehra, Jamil Samaan, Joshua Hakimian, Allistair Clark, Karisma Suchak, Zoe Krut, Taiga Andersson, Susan Persky, Omer Liran, Brennan Spiegel
DOI: 10.1089/cyber.2024.0199
Journal: Cyberpsychology, Behavior and Social Networking
Publication date: 2024-10-24
Publication type: Journal Article
Citations: 0

Abstract

Evaluating for Evidence of Sociodemographic Bias in Conversational AI for Mental Health Support.
The integration of large language models (LLMs) into healthcare highlights the need to ensure their efficacy while mitigating potential harms, such as the perpetuation of biases. Current evidence on the existence of bias within LLMs remains inconclusive. In this study, we present an approach to investigate the presence of bias within an LLM designed for mental health support. We simulated physician-patient conversations by using a communication loop between an LLM-based conversational agent and digital standardized patients (DSPs) that engaged the agent in dialogue while remaining agnostic to sociodemographic characteristics. In contrast, the conversational agent was made aware of each DSP's characteristics, including age, sex, race/ethnicity, and annual income. The agent's responses were analyzed to discern potential systematic biases using the Linguistic Inquiry and Word Count tool. Multivariate regression analysis, trend analysis, and group-based trajectory models were used to quantify potential biases. Among 449 conversations, there was no evidence of bias in either descriptive assessments or multivariable linear regression analyses. Moreover, when evaluating changes in mean tone scores throughout a dialogue, the conversational agent exhibited a capacity to show understanding of the DSPs' chief complaints and to elevate the tone scores of the DSPs throughout conversations. This finding did not vary by any sociodemographic characteristics of the DSP. Using an objective methodology, our study did not uncover significant evidence of bias within an LLM-enabled mental health conversational agent. These findings offer a complementary approach to examining bias in LLM-based conversational agents for mental health support.
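As a rough illustration of the trajectory analysis the abstract describes, the sketch below computes a least-squares slope of tone scores over conversation turns for each conversation, then compares mean slopes across sociodemographic groups. All data, group labels, and score values here are hypothetical; the study itself used LIWC tone scores with multivariate regression and group-based trajectory models, not this toy pipeline.

```python
# Hypothetical sketch: per-conversation tone trajectories compared across
# sociodemographic groups. An unbiased agent should elevate tone at a
# similar rate regardless of group membership.

def tone_slope(scores):
    """Least-squares slope of tone scores over turn index (0, 1, 2, ...)."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Toy conversations: (demographic group, per-turn tone scores).
conversations = [
    ("group_a", [40.0, 52.0, 61.0, 70.0]),
    ("group_a", [35.0, 50.0, 58.0, 66.0]),
    ("group_b", [42.0, 55.0, 60.0, 72.0]),
    ("group_b", [38.0, 49.0, 63.0, 69.0]),
]

slopes_by_group = {}
for group, scores in conversations:
    slopes_by_group.setdefault(group, []).append(tone_slope(scores))

# Similar mean slopes across groups would be consistent with no
# group-dependent difference in how the agent elevates tone.
for group, slopes in sorted(slopes_by_group.items()):
    print(group, round(sum(slopes) / len(slopes), 2))
```

In the real analysis, the slope comparison would be replaced by regression of tone metrics on the DSP's age, sex, race/ethnicity, and income, with trajectory models grouping conversations by their tone-change pattern.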
Source journal
CiteScore: 9.60
Self-citation rate: 3.00%
Articles per year: 123
Journal overview: Cyberpsychology, Behavior, and Social Networking is a leading peer-reviewed journal that is recognized for its authoritative research on the social, behavioral, and psychological impacts of contemporary social networking practices. The journal covers a wide range of platforms, including Twitter, Facebook, internet gaming, and e-commerce, and examines how these digital environments shape human interaction and societal norms. For over two decades, this journal has been a pioneering voice in the exploration of social networking and virtual reality, establishing itself as an indispensable resource for professionals and academics in the field. It is particularly celebrated for its swift dissemination of findings through rapid communication articles, alongside comprehensive, in-depth studies that delve into the multifaceted effects of interactive technologies on both individual behavior and broader societal trends. The journal's scope encompasses the full spectrum of impacts, highlighting not only the potential benefits but also the challenges that arise as a result of these technologies. By providing a platform for rigorous research and critical discussions, it fosters a deeper understanding of the complex interplay between technology and human behavior.