Christopher You, Rashi Ghosh, Melissa Vilaro, Roshan Venkatakrishnan, Rohith Venkatakrishnan, Andrew Maxim, Xuening Peng, Danish Tamboli, Benjamin Lok
Frontiers in Digital Health, vol. 7, article 1655860. DOI: 10.3389/fdgth.2025.1655860. Published 2025-09-10 (eCollection 2025). Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12457298/pdf/
Alter egos alter engagement: perspective-taking can improve disclosure quantity and depth to AI chatbots in promoting mental wellbeing.
Introduction: Emotionally intelligent AI chatbots are increasingly used to support college students' mental wellbeing. Yet, adoption remains limited, as users often hesitate to open up due to emotional barriers and vulnerability. Improving chatbot design may reduce some barriers, but users still bear the emotional burden of opening up and overcoming vulnerability. This study explores whether perspective-taking can support user disclosure by addressing underlying psychological barriers.
Methods: In this between-subjects study, 96 students engaged in a brief reflective conversation with an embodied AI chatbot. Perspective-Taking participants defined and imagined a designated other's perspective and responded from that viewpoint. Control participants provided self-information and responded from their own perspective. Disclosure was measured by quantity (word count) and depth (information, thoughts, and feelings). Additional immediate measures captured readiness, intentions for mental wellbeing, and attitudes toward the chatbot and intervention.
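The quantity measure described above (word count of a participant's responses) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the depth dimensions (information, thoughts, feelings) were human-rated in the study and are not automated here, and the example utterances are hypothetical.

```python
# Illustrative sketch of the "quantity" disclosure measure (word count
# across conversation turns), as described in the Methods. Not the
# study's actual analysis code; depth coding is omitted because it was
# a manual rating task.

def disclosure_quantity(utterances):
    """Total word count across a participant's conversation turns."""
    return sum(len(turn.split()) for turn in utterances)

# Hypothetical Perspective-Taking participant turns:
turns = [
    "I guess my friend has been feeling overwhelmed lately.",
    "From her perspective, she worries about falling behind.",
]
print(disclosure_quantity(turns))  # 17
```

Whitespace tokenization is the simplest choice; a published analysis might instead use a standard tokenizer, which could yield slightly different counts.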
Results: Perspective-Taking participants disclosed a significantly greater quantity, greater overall depth and depth of thoughts, and more frequent high-depth disclosures of thoughts and information. Both groups showed significant improvements in readiness and intention to address mental wellbeing, with no difference in the magnitude of improvement. However, Control participants reported significantly lower skepticism towards the intervention and comparatively greater increases in willingness to engage with AI chatbots.
Discussion: This study highlights how perspective-taking and distancing may facilitate greater disclosure to AI chatbots supporting mental wellbeing. We explore the nature of these disclosures and how perspective-taking may drive readiness and enrich the substance of disclosures. These findings suggest a way for chatbots to evoke deeper reflection and effective support while potentially reducing the need to share sensitive personal self-information directly with generative AI systems.