Lyndsey Hipgrave, Jessie Goldie, Simon Dennis, Amanda Coleman
Frontiers in Digital Health, vol. 7, article 1606291. Published 2025-05-29 (eCollection 2025). DOI: 10.3389/fdgth.2025.1606291
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12158938/pdf/
Balancing risks and benefits: clinicians' perspectives on the use of generative AI chatbots in mental healthcare.
Introduction: Generative-AI chatbots have proliferated in mental health, supporting both clients and clinicians across a range of uses. This paper aimed to explore the perspectives of mental health clinicians on the risks and benefits of integrating generative-AI chatbots into the mental health landscape.
Methods: Twenty-three clinicians participated in a 45-minute virtual interview, in which a series of open-ended and scale-based questions were asked, and a demonstration of a mental health chatbot's potential capabilities was presented.
Results: Participants highlighted several benefits of chatbots, such as their ability to administer homework tasks, provide multilingual support, enhance the accessibility and affordability of mental healthcare, offer access to up-to-date research, and increase engagement in some client groups. However, they also identified risks, including the lack of regulation, data and privacy concerns, chatbots' limited understanding of client backgrounds, the potential for client over-reliance on chatbots, incorrect treatment recommendations, and the inability to detect subtle communication cues such as tone and eye contact. There was no significant finding to suggest that participants viewed either the risks or the benefits as outweighing the other. Moreover, a demonstration of potential chatbot capabilities was not found to influence whether participants favoured the risks or the benefits of chatbots.
Discussion: Qualitative responses revealed that the balance of risks and benefits is highly contextual, varying based on the use case and the population group being served. This study contributes important insights from critical stakeholders for chatbot developers to consider in future iterations of AI tools for mental health.