Unveiling Public Stigma for Borderline Personality Disorder: A Comparative Study of Artificial Intelligence and Mental Health Care Providers.

Impact Factor: 2.0 · CAS Region 3 (Medicine) · JCR Q3 · Psychiatry
Sean Lauderdale, Sarah A Griffin, Kelli R Lahman, Eno Mbaba, Shealyn Tomlinson
Personality and Mental Health, vol. 19, no. 2, e70018. Published 2025-05-01. DOI: 10.1002/pmh.70018
Citations: 0

Abstract

Generative artificial intelligence (GAI) programs can identify symptoms and make recommendations for treatment for mental disorders, including borderline personality disorder (BPD). Despite GAI's potential as a clinical tool, stereotypes are inherent in their algorithms but not obvious until directly assessed. Given this concern, we assessed and compared GAIs' (ChatGPT-3.5, 4, and Google Gemini) symptom recognition and public stigma for a woman and man vignette character with BPD. The GAIs' responses were also compared to a sample of mental health care practitioners (MHCPs; n = 218). Compared to MHCPs, GAI showed more empathy for the characters. GAI were also less likely to view the characters' mental health symptoms as developmental stage problems and rated these symptoms as more chronic and unchangeable. The GAI also rated the vignette characters as less trustworthy and more likely to have difficulty forming close relationships than the MHCPs. Across GAI, gender biases were found, with Google Gemini showing less empathy, more negative reactions, and greater public stigma, particularly for a woman with BPD, than either ChatGPT-3.5 or ChatGPT-4. A woman with BPD was also rated as having more chronic mental health problems than a man by all GAI. Overall, these results suggest that GAI may express empathy but reflect gender bias and stereotyped beliefs about people with BPD. Greater transparency and incorporation of knowledgeable MHCPs and people with lived experience are needed in GAI training to reduce bias and enhance accuracy prior to use in mental health applications.

Source Journal
CiteScore: 4.80
Self-citation rate: 14.80%
Articles published: 38
Journal description: Personality and Mental Health: Multidisciplinary Studies from Personality Dysfunction to Criminal Behaviour aims to lead and shape the international field in this rapidly expanding area, uniting three distinct literatures: DSM-IV/ICD-10 defined personality disorders, psychopathy and offending behaviour. Through its multi-disciplinary and service orientated approach, Personality and Mental Health provides a peer-reviewed, authoritative resource for researchers, practitioners and policy makers working in the areas of personality and mental health.