Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review.

IF 5.8 · Medicine, Zone 2 (CAS) · Q1 PSYCHIATRY
JMIR Mental Health · Pub Date: 2025-02-21 · DOI: 10.2196/60432
Mehrdad Rahsepar Meadi, Tomas Sillekens, Suzanne Metselaar, Anton van Balkom, Justin Bernstein, Neeltje Batelaan
{"title":"Exploring the Ethical Challenges of Conversational AI in Mental Health Care: Scoping Review.","authors":"Mehrdad Rahsepar Meadi, Tomas Sillekens, Suzanne Metselaar, Anton van Balkom, Justin Bernstein, Neeltje Batelaan","doi":"10.2196/60432","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.</p><p><strong>Objective: </strong>We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.</p><p><strong>Methods: </strong>We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.</p><p><strong>Results: </strong>We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of \"black box\" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles.</p><p><strong>Conclusions: </strong>Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. 
Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.</p>","PeriodicalId":48616,"journal":{"name":"Jmir Mental Health","volume":"12 ","pages":"e60432"},"PeriodicalIF":5.8000,"publicationDate":"2025-02-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11890142/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Jmir Mental Health","FirstCategoryId":"3","ListUrlMain":"https://doi.org/10.2196/60432","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHIATRY","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Conversational artificial intelligence (CAI) is emerging as a promising digital technology for mental health care. CAI apps, such as psychotherapeutic chatbots, are available in app stores, but their use raises ethical concerns.

Objective: We aimed to provide a comprehensive overview of ethical considerations surrounding CAI as a therapist for individuals with mental health issues.

Methods: We conducted a systematic search across PubMed, Embase, APA PsycINFO, Web of Science, Scopus, the Philosopher's Index, and ACM Digital Library databases. Our search comprised 3 elements: embodied artificial intelligence, ethics, and mental health. We defined CAI as a conversational agent that interacts with a person and uses artificial intelligence to formulate output. We included articles discussing the ethical challenges of CAI functioning in the role of a therapist for individuals with mental health issues. We added additional articles through snowball searching. We included articles in English or Dutch. All types of articles were considered except abstracts of symposia. Screening for eligibility was done by 2 independent researchers (MRM and TS or AvB). An initial charting form was created based on the expected considerations and revised and complemented during the charting process. The ethical challenges were divided into themes. When a concern occurred in more than 2 articles, we identified it as a distinct theme.
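
For illustration only, a 3-element Boolean query of the kind described (terms for the AI agent AND ethics AND mental health) might be structured as follows in PubMed-style syntax. This is a hypothetical sketch, not the authors' published search string:

    ("embodied artificial intelligence" OR "conversational agent" OR chatbot) AND (ethic* OR moral*) AND ("mental health" OR psychiatr* OR psychotherap*)

Each parenthesized group corresponds to one of the 3 search elements; the actual review would have used database-specific syntax and a fuller set of synonyms for each element.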

Results: We included 101 articles, of which 95% (n=96) were published in 2018 or later. Most were reviews (n=22, 21.8%) followed by commentaries (n=17, 16.8%). The following 10 themes were distinguished: (1) safety and harm (discussed in 52/101, 51.5% of articles); the most common topics within this theme were suicidality and crisis management, harmful or wrong suggestions, and the risk of dependency on CAI; (2) explicability, transparency, and trust (n=26, 25.7%), including topics such as the effects of "black box" algorithms on trust; (3) responsibility and accountability (n=31, 30.7%); (4) empathy and humanness (n=29, 28.7%); (5) justice (n=41, 40.6%), including themes such as health inequalities due to differences in digital literacy; (6) anthropomorphization and deception (n=24, 23.8%); (7) autonomy (n=12, 11.9%); (8) effectiveness (n=38, 37.6%); (9) privacy and confidentiality (n=62, 61.4%); and (10) concerns for health care workers' jobs (n=16, 15.8%). Other themes were discussed in 9.9% (n=10) of the identified articles.

Conclusions: Our scoping review has comprehensively covered ethical aspects of CAI in mental health care. While certain themes remain underexplored and stakeholders' perspectives are insufficiently represented, this study highlights critical areas for further research. These include evaluating the risks and benefits of CAI in comparison to human therapists, determining its appropriate roles in therapeutic contexts and its impact on care access, and addressing accountability. Addressing these gaps can inform normative analysis and guide the development of ethical guidelines for responsible CAI use in mental health care.


Source Journal
JMIR Mental Health (Medicine: Psychiatry and Mental Health)
CiteScore: 10.80
Self-citation rate: 3.80%
Annual articles: 104
Review time: 16 weeks
About the journal: JMIR Mental Health (JMH, ISSN 2368-7959) is a PubMed-indexed, peer-reviewed sister journal of JMIR, the leading eHealth journal (2016 Impact Factor: 5.175). JMIR Mental Health focuses on digital health and internet interventions, technologies, and electronic innovations (software and hardware) for mental health, addictions, online counselling, and behaviour change. This includes formative evaluations and system descriptions, theoretical papers, review papers, viewpoint/vision papers, and rigorous evaluations.