A critical discussion of strategies and ramifications of implementing conversational agents in mental healthcare

Arthur Bran Herbener, Michał Klincewicz, Lily Frank, Malene Flensborg Damholdt

Computers in Human Behavior: Artificial Humans, Volume 5, Article 100182. Published 2025-07-08. DOI: 10.1016/j.chbah.2025.100182
In recent years, there has been growing optimism about the potential of conversational agents, such as chatbots and social robots, in mental healthcare. Their scalability offers a promising solution to some of the key limitations of the dominant model of treatment in Western countries. However, while recent experimental research provides grounds for cautious optimism, the integration of conversational agents into mental healthcare raises significant clinical and ethical challenges, particularly concerning the partial or full replacement of human practitioners. Overall, this theoretical paper examines the clinical and ethical implications of deploying conversational agents in mental health services as a partial or full replacement of human practitioners. On the one hand, we outline how these agents can circumvent core treatment barriers through stepped care, blended care, and a personalized-medicine approach. On the other hand, we argue that the partial or full substitution of human practitioners can have profound consequences for the ethical landscape of mental healthcare, potentially undermining patients' rights and safety. In making this argument, this work extends prior literature by specifically considering how different levels of implementation of conversational agents in healthcare present both opportunities and risks. We argue for the urgent need to establish regulatory frameworks to ensure that the integration of conversational agents into mental healthcare is both safe and ethically sound.