Jessie Goldie, Simon Dennis, Lyndsey Hipgrave, Amanda Coleman
{"title":"在精神卫生保健中使用生成式人工智能聊天机器人的从业者观点:混合方法研究。","authors":"Jessie Goldie, Simon Dennis, Lyndsey Hipgrave, Amanda Coleman","doi":"10.2196/71065","DOIUrl":null,"url":null,"abstract":"<p><strong>Background: </strong>Generative artificial intelligence (AI) chatbots have the potential to improve mental health care for practitioners and clients. Evidence demonstrates that AI chatbots can assist with tasks such as documentation, research, counseling, and therapeutic exercises. However, research examining practitioners' perspectives is limited.</p><p><strong>Objective: </strong>This mixed-methods study investigates: (1) practitioners' perspectives on different uses of generative AI chatbots; (2) their likelihood of recommending chatbots to clients; and (3) whether recommendation likelihood increases after viewing a demonstration.</p><p><strong>Methods: </strong>Participants were 23 mental health practitioners, including 17 females and 6 males, with a mean age of 39.39 (SD 16.20) years. In 45-minute interviews, participants selected their 3 most helpful uses of chatbots from 11 options and rated their likelihood of recommending chatbots to clients on a Likert scale before and after an 11-minute chatbot demonstration.</p><p><strong>Results: </strong>Binomial tests found that Generating case notes was selected at greater-than-chance levels ( 15/23, 65%; P=.001), while Support with session planning (P=.86) and Identifying and suggesting literature (P=.10) were not. Although 55% (12/23) were likely to recommend chatbots to clients, a binomial test found no significant difference from the 50% threshold (P=.74). A paired samples t test found that recommendation likelihood increased significantly (19/23, 83%; P=.002) from predemonstration to postdemonstration.</p><p><strong>Conclusions: </strong>Findings suggest practitioners favor administrative uses of generative AI and are more likely to recommend chatbots to clients after exposure. This study highlights a need for practitioner education and guidelines to support safe and effective AI integration in mental health care.</p>","PeriodicalId":36351,"journal":{"name":"JMIR Human Factors","volume":"12 ","pages":"e71065"},"PeriodicalIF":3.0000,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12440320/pdf/","citationCount":"0","resultStr":"{\"title\":\"Practitioner Perspectives on the Uses of Generative AI Chatbots in Mental Health Care: Mixed Methods Study.\",\"authors\":\"Jessie Goldie, Simon Dennis, Lyndsey Hipgrave, Amanda Coleman\",\"doi\":\"10.2196/71065\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Background: </strong>Generative artificial intelligence (AI) chatbots have the potential to improve mental health care for practitioners and clients. Evidence demonstrates that AI chatbots can assist with tasks such as documentation, research, counseling, and therapeutic exercises. However, research examining practitioners' perspectives is limited.</p><p><strong>Objective: </strong>This mixed-methods study investigates: (1) practitioners' perspectives on different uses of generative AI chatbots; (2) their likelihood of recommending chatbots to clients; and (3) whether recommendation likelihood increases after viewing a demonstration.</p><p><strong>Methods: </strong>Participants were 23 mental health practitioners, including 17 females and 6 males, with a mean age of 39.39 (SD 16.20) years. 
In 45-minute interviews, participants selected their 3 most helpful uses of chatbots from 11 options and rated their likelihood of recommending chatbots to clients on a Likert scale before and after an 11-minute chatbot demonstration.</p><p><strong>Results: </strong>Binomial tests found that Generating case notes was selected at greater-than-chance levels ( 15/23, 65%; P=.001), while Support with session planning (P=.86) and Identifying and suggesting literature (P=.10) were not. Although 55% (12/23) were likely to recommend chatbots to clients, a binomial test found no significant difference from the 50% threshold (P=.74). A paired samples t test found that recommendation likelihood increased significantly (19/23, 83%; P=.002) from predemonstration to postdemonstration.</p><p><strong>Conclusions: </strong>Findings suggest practitioners favor administrative uses of generative AI and are more likely to recommend chatbots to clients after exposure. This study highlights a need for practitioner education and guidelines to support safe and effective AI integration in mental health care.</p>\",\"PeriodicalId\":36351,\"journal\":{\"name\":\"JMIR Human Factors\",\"volume\":\"12 \",\"pages\":\"e71065\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2025-09-16\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12440320/pdf/\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"JMIR Human Factors\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2196/71065\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"JMIR Human Factors","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2196/71065","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Practitioner Perspectives on the Uses of Generative AI Chatbots in Mental Health Care: Mixed Methods Study.
Background: Generative artificial intelligence (AI) chatbots have the potential to improve mental health care for practitioners and clients. Evidence demonstrates that AI chatbots can assist with tasks such as documentation, research, counseling, and therapeutic exercises. However, research examining practitioners' perspectives is limited.
Objective: This mixed methods study investigated (1) practitioners' perspectives on different uses of generative AI chatbots, (2) their likelihood of recommending chatbots to clients, and (3) whether recommendation likelihood increases after viewing a demonstration.
Methods: Participants were 23 mental health practitioners (17 women and 6 men) with a mean age of 39.39 (SD 16.20) years. In 45-minute interviews, participants selected the 3 chatbot uses they considered most helpful from a list of 11 options and rated their likelihood of recommending chatbots to clients on a Likert scale before and after an 11-minute chatbot demonstration.
Results: Binomial tests found that "Generating case notes" was selected at a greater-than-chance level (15/23, 65%; P=.001), whereas "Support with session planning" (P=.86) and "Identifying and suggesting literature" (P=.10) were not. Although 55% (12/23) of participants reported being likely to recommend chatbots to clients, a binomial test found no significant difference from the 50% threshold (P=.74). A paired samples t test found that recommendation likelihood increased significantly from predemonstration to postdemonstration (19/23, 83%; P=.002).
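For readers who want to see how results of this kind are typically computed, the sketch below shows one way the reported binomial tests could be run in Python with SciPy. It is an illustration, not the authors' analysis code: the per-option chance probability of 3/11, the sidedness of the tests, and the use of SciPy are all assumptions made for the example.

```python
# Minimal sketch (assumed, not the authors' code) of how the abstract's
# binomial tests could be reproduced with SciPy, given the counts above.
from scipy.stats import binomtest

n = 23             # practitioners interviewed
p_chance = 3 / 11  # assumed chance of any one option being picked when 3 of 11 are chosen

# "Generating case notes": selected by 15 of 23 participants
case_notes = binomtest(k=15, n=n, p=p_chance, alternative="greater")
print(case_notes.pvalue)  # small p value, consistent with greater-than-chance selection

# Likelihood of recommending chatbots tested against a 50% threshold (12 of 23 "likely")
recommend = binomtest(k=12, n=n, p=0.5)
print(recommend.pvalue)   # not significant at the .05 level

# The pre- vs post-demonstration comparison would be a paired samples t test
# on the two sets of Likert ratings, e.g. scipy.stats.ttest_rel(post, pre).
```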
Conclusions: Findings suggest practitioners favor administrative uses of generative AI and are more likely to recommend chatbots to clients after seeing them demonstrated. This study highlights the need for practitioner education and guidelines to support safe and effective AI integration in mental health care.