Public Perception on Artificial Intelligence-Driven Mental Health Interventions: Survey Research.

Impact Factor: 2.0 · JCR Q3 (Health Care Sciences & Services)
Mahima Anna Varghese, Poonam Sharma, Maitreyee Patwardhan
{"title":"Public Perception on Artificial Intelligence-Driven Mental Health Interventions: Survey Research.","authors":"Mahima Anna Varghese, Poonam Sharma, Maitreyee Patwardhan","doi":"10.2196/64380","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) has become increasingly important in health care, generating both curiosity and concern. With a doctor-patient ratio of 1:834 in India, AI has the potential to alleviate a significant health care burden. Public perception plays a crucial role in shaping attitudes that can facilitate the adoption of new technologies. Similarly, the acceptance of AI-driven mental health interventions is crucial in determining their effectiveness and widespread adoption. Therefore, it is essential to study public perceptions and usage of existing AI-driven mental health interventions by exploring user experiences and opinions on their future applicability, particularly in comparison to traditional, human-based interventions.</p><p><strong>Objective: </strong>This study aims to explore the use, perception, and acceptance of AI-driven mental health interventions in comparison to traditional, human-based interventions.</p><p><strong>Methods: </strong>A total of 466 adult participants from India voluntarily completed a 30-item web-based survey on the use and perception of AI-based mental health interventions between November and December 2023.</p><p><strong>Results: </strong>Of the 466 respondents, only 163 (35%) had ever consulted a mental health professional. Additionally, 305 (65.5%) reported very low knowledge of AI-driven interventions. In terms of trust, 247 (53%) expressed a moderate level of Trust in AI-Driven Mental Health Interventions, while only 24 (5.2%) reported a high level of trust. By contrast, 114 (24.5%) reported high trust and 309 (66.3%) reported moderate Trust in Human-Based Mental Health Interventions; 242 (51.9%) participants reported a high level of stigma associated with using human-based interventions, compared with only 50 (10.7%) who expressed concerns about stigma related to AI-driven interventions. Additionally, 162 (34.8%) expressed a positive outlook toward the future use and social acceptance of AI-based interventions. The majority of respondents indicated that AI could be a useful option for providing general mental health tips and conducting initial assessments. The key benefits of AI highlighted by participants were accessibility, cost-effectiveness, 24/7 availability, and reduced stigma. Major concerns included data privacy, security, the lack of human touch, and the potential for misdiagnosis.</p><p><strong>Conclusions: </strong>There is a general lack of awareness about AI-driven mental health interventions. However, AI shows potential as a viable option for prevention, primary assessment, and ongoing mental health maintenance. Currently, people tend to trust traditional mental health practices more. Stigma remains a significant barrier to accessing traditional mental health services. Currently, the human touch remains an indispensable aspect of human-based mental health care, one that AI cannot replace. However, integrating AI with human mental health professionals is seen as a compelling model. AI is positively perceived in terms of accessibility, availability, and destigmatization. 
Knowledge and perceived trustworthiness are key factors influencing the acceptance and effectiveness of AI-driven mental health interventions.</p>","PeriodicalId":14841,"journal":{"name":"JMIR Formative Research","volume":"8 ","pages":"e64380"},"PeriodicalIF":2.0000,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11638687/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Formative Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/64380","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
引用次数: 0

Abstract

Background: Artificial intelligence (AI) has become increasingly important in health care, generating both curiosity and concern. With a doctor-patient ratio of 1:834 in India, AI has the potential to alleviate a significant health care burden. Public perception plays a crucial role in shaping attitudes that can facilitate the adoption of new technologies. Similarly, the acceptance of AI-driven mental health interventions is crucial in determining their effectiveness and widespread adoption. Therefore, it is essential to study public perceptions and usage of existing AI-driven mental health interventions by exploring user experiences and opinions on their future applicability, particularly in comparison to traditional, human-based interventions.

Objective: This study aims to explore the use, perception, and acceptance of AI-driven mental health interventions in comparison to traditional, human-based interventions.

Methods: A total of 466 adult participants from India voluntarily completed a 30-item web-based survey on the use and perception of AI-based mental health interventions between November and December 2023.

Results: Of the 466 respondents, only 163 (35%) had ever consulted a mental health professional, and 305 (65.5%) reported very low knowledge of AI-driven interventions. In terms of trust, 247 (53%) expressed a moderate level of trust in AI-driven mental health interventions, while only 24 (5.2%) reported a high level of trust. By contrast, 114 (24.5%) reported high trust and 309 (66.3%) reported moderate trust in human-based mental health interventions. A total of 242 (51.9%) participants reported a high level of stigma associated with using human-based interventions, compared with only 50 (10.7%) who expressed concerns about stigma related to AI-driven interventions. Additionally, 162 (34.8%) expressed a positive outlook toward the future use and social acceptance of AI-based interventions. The majority of respondents indicated that AI could be a useful option for providing general mental health tips and conducting initial assessments. The key benefits of AI highlighted by participants were accessibility, cost-effectiveness, 24/7 availability, and reduced stigma. Major concerns included data privacy, security, the lack of human touch, and the potential for misdiagnosis.
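As a minimal illustrative check (not part of the paper), the reported percentages can be recomputed from the raw counts against the full sample of 466 respondents; the category labels below are paraphrased from the abstract for readability.

```python
# Recompute the abstract's percentages from the reported counts (N = 466).
N = 466  # total respondents

counts = {
    "ever consulted a mental health professional": 163,
    "very low knowledge of AI-driven interventions": 305,
    "moderate trust in AI-driven interventions": 247,
    "high trust in AI-driven interventions": 24,
    "high trust in human-based interventions": 114,
    "moderate trust in human-based interventions": 309,
    "high stigma around human-based interventions": 242,
    "stigma concerns about AI-driven interventions": 50,
    "positive outlook on future AI-based interventions": 162,
}

for label, n in counts.items():
    # e.g. 163/466 -> 35.0%, matching the abstract's rounded figures
    print(f"{label}: {n}/{N} = {100 * n / N:.1f}%")
```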

Conclusions: There is a general lack of awareness about AI-driven mental health interventions. However, AI shows potential as a viable option for prevention, primary assessment, and ongoing mental health maintenance. At present, people tend to trust traditional mental health practices more, and stigma remains a significant barrier to accessing traditional mental health services. The human touch remains an indispensable aspect of human-based mental health care, one that AI cannot replace; however, integrating AI with human mental health professionals is seen as a compelling model. AI is positively perceived in terms of accessibility, availability, and destigmatization. Knowledge and perceived trustworthiness are key factors influencing the acceptance and effectiveness of AI-driven mental health interventions.

Source journal: JMIR Formative Research (Medicine, miscellaneous)
CiteScore: 2.70 · Self-citation rate: 9.10% · Articles published: 579 · Review time: 12 weeks