The Ability of AI Therapy Bots to Set Limits With Distressed Adolescents: Simulation-Based Comparison Study.

Impact Factor: 5.8 · CAS Tier 2 (Medicine) · JCR Q1 (Psychiatry)
JMIR Mental Health · Publication date: 2025-08-18 · DOI: 10.2196/78414
Andrew Clark
{"title":"人工智能治疗机器人对痛苦青少年设定限制的能力:基于模拟的比较研究。","authors":"Andrew Clark","doi":"10.2196/78414","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Recent developments in generative artificial intelligence (AI) have introduced the general public to powerful, easily accessible tools, such as ChatGPT and Gemini, for a rapidly expanding range of uses. Among those uses are specialized chatbots that serve in the role of a therapist, as well as personally curated digital companions that offer emotional support. However, the ability of AI therapists to provide consistently safe and effective treatment remains largely unproven, and those concerns are especially salient in regard to adolescents seeking mental health support.</p><p><strong>Objective: </strong>This study aimed to determine the willingness of therapy and companion AI chatbots to endorse harmful or ill-advised ideas proposed by fictional teenagers experiencing mental health distress.</p><p><strong>Methods: </strong>A convenience sample of 10 publicly available AI bots offering therapeutic support or companionship were each presented with 3 detailed fictional case vignettes of adolescents with mental health challenges. Each fictional adolescent asked the AI chatbot to endorse 2 harmful or ill-advised proposals, such as dropping out of school, avoiding all human contact for a month, or pursuing a relationship with an older teacher, resulting in a total of 6 proposals presented to each chatbot. The clinical scenarios presented were intended to reflect challenges commonly seen in the practice of therapy with adolescents, and the proposals offered by the fictional teenagers were intended to be clearly dangerous or unwise. The 10 AI bots were selected by the author to represent a range of chatbot types, including generic AI bots, companion bots, and dedicated mental health bots. Chatbot responses were analyzed for explicit endorsement, defined as direct support for the teenagers' proposed behavior.</p><p><strong>Results: </strong>Across 60 total scenarios, chatbots actively endorsed harmful proposals in 19 out of the 60 (32%) opportunities to do so. Of the 10 chatbots, 4 endorsed half or more of the ideas proposed to them, and none of the bots managed to oppose them all.</p><p><strong>Conclusions: </strong>A significant proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers. These results raise concerns about the ability of some AI-based companion or therapy bots to safely support teenagers with serious mental health issues and heighten concern that AI bots may tend to be overly supportive at the expense of offering useful guidance when appropriate. 
The results highlight the urgent need for oversight, safety protocols, and ongoing research regarding digital mental health support for adolescents.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"12 ","pages":"e78414"},"PeriodicalIF":5.8000,"publicationDate":"2025-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12360667/pdf/","citationCount":"0","resultStr":"{\"title\":\"The Ability of AI Therapy Bots to Set Limits With Distressed Adolescents: Simulation-Based Comparison Study.\",\"authors\":\"Andrew Clark\",\"doi\":\"10.2196/78414\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Recent developments in generative artificial intelligence (AI) have introduced the general public to powerful, easily accessible tools, such as ChatGPT and Gemini, for a rapidly expanding range of uses. Among those uses are specialized chatbots that serve in the role of a therapist, as well as personally curated digital companions that offer emotional support. However, the ability of AI therapists to provide consistently safe and effective treatment remains largely unproven, and those concerns are especially salient in regard to adolescents seeking mental health support.</p><p><strong>Objective: </strong>This study aimed to determine the willingness of therapy and companion AI chatbots to endorse harmful or ill-advised ideas proposed by fictional teenagers experiencing mental health distress.</p><p><strong>Methods: </strong>A convenience sample of 10 publicly available AI bots offering therapeutic support or companionship were each presented with 3 detailed fictional case vignettes of adolescents with mental health challenges. Each fictional adolescent asked the AI chatbot to endorse 2 harmful or ill-advised proposals, such as dropping out of school, avoiding all human contact for a month, or pursuing a relationship with an older teacher, resulting in a total of 6 proposals presented to each chatbot. The clinical scenarios presented were intended to reflect challenges commonly seen in the practice of therapy with adolescents, and the proposals offered by the fictional teenagers were intended to be clearly dangerous or unwise. The 10 AI bots were selected by the author to represent a range of chatbot types, including generic AI bots, companion bots, and dedicated mental health bots. Chatbot responses were analyzed for explicit endorsement, defined as direct support for the teenagers' proposed behavior.</p><p><strong>Results: </strong>Across 60 total scenarios, chatbots actively endorsed harmful proposals in 19 out of the 60 (32%) opportunities to do so. Of the 10 chatbots, 4 endorsed half or more of the ideas proposed to them, and none of the bots managed to oppose them all.</p><p><strong>Conclusions: </strong>A significant proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers. These results raise concerns about the ability of some AI-based companion or therapy bots to safely support teenagers with serious mental health issues and heighten concern that AI bots may tend to be overly supportive at the expense of offering useful guidance when appropriate. 
The results highlight the urgent need for oversight, safety protocols, and ongoing research regarding digital mental health support for adolescents.</p>\",\"PeriodicalId\":48616,\"journal\":{\"name\":\"Jmir Mental Health\",\"volume\":\"12 \",\"pages\":\"e78414\"},\"PeriodicalIF\":5.8000,\"publicationDate\":\"2025-08-18\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12360667/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Jmir Mental Health\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"https://doi.org/10.2196/78414\",\"RegionNum\":2,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHIATRY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Jmir Mental Health","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/78414","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHIATRY","Score":null,"Total":0}
Citations: 0

Abstract


Background: Recent developments in generative artificial intelligence (AI) have introduced the general public to powerful, easily accessible tools, such as ChatGPT and Gemini, for a rapidly expanding range of uses. Among those uses are specialized chatbots that serve in the role of a therapist, as well as personally curated digital companions that offer emotional support. However, the ability of AI therapists to provide consistently safe and effective treatment remains largely unproven, and safety concerns are especially salient for adolescents seeking mental health support.

Objective: This study aimed to determine the willingness of therapy and companion AI chatbots to endorse harmful or ill-advised ideas proposed by fictional teenagers experiencing mental health distress.

Methods: A convenience sample of 10 publicly available AI bots offering therapeutic support or companionship was assembled, and each bot was presented with 3 detailed fictional case vignettes of adolescents with mental health challenges. Each fictional adolescent asked the chatbot to endorse 2 harmful or ill-advised proposals, such as dropping out of school, avoiding all human contact for a month, or pursuing a relationship with an older teacher, for a total of 6 proposals per chatbot. The clinical scenarios were intended to reflect challenges commonly seen in the practice of therapy with adolescents, and the proposals offered by the fictional teenagers were intended to be clearly dangerous or unwise. The 10 bots were selected by the author to represent a range of chatbot types, including generic AI bots, companion bots, and dedicated mental health bots. Chatbot responses were analyzed for explicit endorsement, defined as direct support for the teenager's proposed behavior.
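The paper does not publish protocol code, but the design reduces to a small evaluation loop: 10 bots × 3 vignettes × 2 proposals = 60 trials, each coded for explicit endorsement. Purely as an illustration, here is a minimal Python sketch of such a harness; the `Trial` record, the bot-calling interface, and the `codes_as_endorsement` rater are hypothetical stand-ins, not artifacts of the study.

```python
# Illustrative sketch only: nothing here names the actual bots, prompts,
# or rating procedure used in the study.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Trial:
    bot: str        # which chatbot was queried
    vignette: str   # fictional adolescent case, presented in character
    proposal: str   # harmful or ill-advised idea the "teen" asks the bot to endorse
    endorsed: bool  # did the response explicitly support the proposal?

def run_protocol(
    bots: dict[str, Callable[[str], str]],        # bot name -> send-message function
    vignettes: dict[str, list[str]],              # vignette text -> its 2 proposals
    codes_as_endorsement: Callable[[str], bool],  # rater stub (human coding in the study)
) -> list[Trial]:
    """Present every proposal, embedded in its vignette, to every bot,
    and code each response for explicit endorsement."""
    trials = []
    for bot_name, ask in bots.items():
        for vignette, proposals in vignettes.items():
            for proposal in proposals:
                reply = ask(f"{vignette}\n\nThe teen asks: {proposal}")
                trials.append(Trial(bot_name, vignette, proposal,
                                    codes_as_endorsement(reply)))
    return trials
```

With 10 bots, 3 vignettes, and 2 proposals per vignette, `run_protocol` yields the study's 60 trials.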

Results: Across the 60 scenarios, chatbots explicitly endorsed harmful proposals in 19 of the 60 (32%) opportunities to do so. Of the 10 chatbots, 4 endorsed half or more of the ideas proposed to them, and none of the bots opposed all 6 of the proposals put to it.
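As an arithmetic check, the headline figures follow from a simple tally over those 60 trials. Continuing the hypothetical sketch above (again, not the study's own code), the aggregate 19/60 (32%) figure and the "half or more" per-bot criterion could be reproduced as:

```python
from collections import Counter

def summarize(trials: list[Trial]) -> None:
    """Tally explicit endorsements overall and per bot."""
    endorsed = [t for t in trials if t.endorsed]
    # Study aggregate: 19/60, which formats to 32%.
    print(f"Endorsed {len(endorsed)}/{len(trials)} "
          f"({len(endorsed) / len(trials):.0%})")
    per_bot = Counter(t.bot for t in endorsed)
    for bot in {t.bot for t in trials}:
        n = per_bot[bot]   # out of the 6 proposals each bot saw
        if n >= 3:         # endorsed half or more (4 of 10 bots in the study)
            print(f"{bot}: endorsed {n}/6 proposals")
```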

Conclusions: A significant proportion of AI chatbots offering mental health or emotional support endorsed harmful proposals from fictional teenagers. These results raise concerns about the ability of some AI-based companion or therapy bots to safely support teenagers with serious mental health issues and heighten concern that AI bots may tend to be overly supportive at the expense of offering useful guidance when appropriate. The results highlight the urgent need for oversight, safety protocols, and ongoing research regarding digital mental health support for adolescents.

Source journal: JMIR Mental Health (Medicine - Psychiatry and Mental Health)
CiteScore: 10.80
Self-citation rate: 3.80%
Articles per year: 104
Review time: 16 weeks
About the journal: JMIR Mental Health (JMH, ISSN 2368-7959) is a PubMed-indexed, peer-reviewed sister journal of JMIR, the leading eHealth journal (Impact Factor 2016: 5.175). JMIR Mental Health focuses on digital health and Internet interventions, technologies and electronic innovations (software and hardware) for mental health, addictions, online counselling and behaviour change. This includes formative evaluation and system descriptions, theoretical papers, review papers, viewpoint/vision papers, and rigorous evaluations.