“This is human intelligence debugging artificial intelligence”: Examining how people prompt GPT in seeking mental health support

IF 5.1 | CAS Tier 2, Computer Science | JCR Q1, COMPUTER SCIENCE, CYBERNETICS
Zhuoyang Li , Zihao Zhu , Xinning Gui , Yuhan Luo
International Journal of Human-Computer Studies, Volume 203, Article 103555
DOI: 10.1016/j.ijhcs.2025.103555
Published: 2025-06-09 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S1071581925001120
Citations: 0

Abstract


Large language models (LLMs) could extend digital support for mental well-being with their unprecedented language understanding and generation ability. While we have seen individuals who lack access to professional care utilizing LLMs for mental health support, it is unclear how they prompt and interact with LLMs given their individualized emotional needs and life situations. In this work, we analyzed 49 threads and 7,538 comments on Reddit, aiming to understand how people seek mental health support from GPT by creating and crafting various prompts. Despite GPT explicitly disclaiming that it is not an alternative to professional care, we found that users continued to use it for support and devised different prompts to bypass the safety guardrails. Meanwhile, users actively refined and shared their prompts to make GPT more human-like by specifying nuanced communication styles and cultivating in-depth discussions. They also came up with several strategies to make GPT communicate more efficiently to enrich the customized personas on the fly or gain multiple perspectives. Reflecting on these findings, we discuss the tensions associated with using LLMs for mental health support and the implications for designing safer and more empowering human-LLM interactions.
Source Journal
International Journal of Human-Computer Studies (Engineering & Technology – Computer Science: Cybernetics)
CiteScore: 11.50
Self-citation rate: 5.60%
Articles per year: 108
Review time: 3 months
Aims and scope: The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities. Research areas relevant to the journal include, but are not limited to: • Innovative interaction techniques • Multimodal interaction • Speech interaction • Graphic interaction • Natural language interaction • Interaction in mobile and embedded systems • Interface design and evaluation methodologies • Design and evaluation of innovative interactive systems • User interface prototyping and management systems • Ubiquitous computing • Wearable computers • Pervasive computing • Affective computing • Empirical studies of user behaviour • Empirical studies of programming and software engineering • Computer supported cooperative work • Computer mediated communication • Virtual reality • Mixed and augmented reality • Intelligent user interfaces • Presence …