Knowledge cues to human origins facilitate self-disclosure during interactions with chatbots

Gabriella Warren-Smith, Guy Laban, Emily-Marie Pacheco, Emily S. Cross

Computers in Human Behavior: Artificial Humans, Vol. 5, Article 100174 (published 2025-06-20). DOI: 10.1016/j.chbah.2025.100174. Available at: https://www.sciencedirect.com/science/article/pii/S2949882125000581
Chatbots are emerging as a self-management tool for supporting mental health, appearing across commercial and healthcare settings. Whilst chatbots are valued for their perceived lack of judgement, they lack the emotional intelligence and empathy needed to build trust and rapport with users. A resulting debate questions whether chatbots facilitate or hinder self-disclosure. This study presents a within-subjects experiment investigating the parameters of self-disclosure in open-domain social interactions with chatbots. Participants engaged in two short social interactions with two chatbots: one with the knowledge that they were conversing with a chatbot, and one with the false belief that they were conversing with a human. A significant difference was found between the two treatments: participants disclosed more to the chatbot that was introduced as a human, perceived themselves as disclosing more to it, found it more comforting, and attributed higher levels of agency and experience to it than to the chatbot that was introduced as a chatbot. However, participants’ disclosures to the chatbot introduced as a chatbot were significantly more sentimental, and they found it friendlier than the chatbot introduced as a human. These results indicate that whilst cues to a chatbot’s human origins enhance self-disclosure and perceptions of mind, an artificial agent that defies one’s social expectations may be viewed negatively on social factors that require higher cognitive processing.