{"title":"The way you assess matters: User interaction design of survey chatbots for mental health","authors":"Yucheng Jin, Li Chen, Xianglin Zhao, Wanling Cai","doi":"10.1016/j.ijhcs.2024.103290","DOIUrl":null,"url":null,"abstract":"<div><p>The global pandemic has pushed human society into a mental health crisis, prompting the development of various chatbots to supplement the limited mental health workforce. Several organizations have employed mental health survey chatbots for public mental status assessments. These survey chatbots typically ask closed-ended questions (Closed-EQs) to assess specific psychological issues like anxiety, depression, and loneliness, followed by open-ended questions (Open-EQs) for deeper insights. While Open-EQs are naturally presented conversationally in a survey chatbot, Closed-EQs can be delivered as embedded forms or within conversations, with the length of the questionnaire varying according to the psychological assessment. This study investigates how the <em>interaction style</em> of Closed-EQs and the <em>questionnaire length</em> affect user perceptions regarding survey credibility, enjoyment, and self-awareness, as well as their responses to Open-EQs in terms of quality and self-disclosure in a survey chatbot. We conducted a 2 (<em>interaction style</em>: form-based vs. conversation-based) <span><math><mo>×</mo></math></span> 3 (<em>questionnaire length</em>: short vs. middle vs. long) between-subjects study (N=213) with a loneliness survey chatbot. The results indicate that the form-based interaction significantly enhances the perceived credibility of the assessment, thereby improving response quality and self-disclosure in subsequent Open-EQs and fostering self-awareness. 
We discuss our findings for the interaction design of psychological assessment in a survey chatbot for mental health.</p></div>","PeriodicalId":54955,"journal":{"name":"International Journal of Human-Computer Studies","volume":"189 ","pages":"Article 103290"},"PeriodicalIF":5.3000,"publicationDate":"2024-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Human-Computer Studies","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1071581924000740","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0
Abstract
The global pandemic has pushed human society into a mental health crisis, prompting the development of various chatbots to supplement the limited mental health workforce. Several organizations have employed mental health survey chatbots for public mental status assessments. These survey chatbots typically ask closed-ended questions (Closed-EQs) to assess specific psychological issues like anxiety, depression, and loneliness, followed by open-ended questions (Open-EQs) for deeper insights. While Open-EQs are naturally presented conversationally in a survey chatbot, Closed-EQs can be delivered as embedded forms or within conversations, with the length of the questionnaire varying according to the psychological assessment. This study investigates how the interaction style of Closed-EQs and the questionnaire length affect user perceptions regarding survey credibility, enjoyment, and self-awareness, as well as their responses to Open-EQs in terms of quality and self-disclosure in a survey chatbot. We conducted a 2 (interaction style: form-based vs. conversation-based) × 3 (questionnaire length: short vs. middle vs. long) between-subjects study (N=213) with a loneliness survey chatbot. The results indicate that the form-based interaction significantly enhances the perceived credibility of the assessment, thereby improving response quality and self-disclosure in subsequent Open-EQs and fostering self-awareness. We discuss the implications of our findings for the interaction design of psychological assessment in a survey chatbot for mental health.
Journal introduction:
The International Journal of Human-Computer Studies publishes original research over the whole spectrum of work relevant to the theory and practice of innovative interactive systems. The journal is inherently interdisciplinary, covering research in computing, artificial intelligence, psychology, linguistics, communication, design, engineering, and social organization, which is relevant to the design, analysis, evaluation and application of innovative interactive systems. Papers at the boundaries of these disciplines are especially welcome, as it is our view that interdisciplinary approaches are needed for producing theoretical insights in this complex area and for effective deployment of innovative technologies in concrete user communities.
Research areas relevant to the journal include, but are not limited to:
• Innovative interaction techniques
• Multimodal interaction
• Speech interaction
• Graphic interaction
• Natural language interaction
• Interaction in mobile and embedded systems
• Interface design and evaluation methodologies
• Design and evaluation of innovative interactive systems
• User interface prototyping and management systems
• Ubiquitous computing
• Wearable computers
• Pervasive computing
• Affective computing
• Empirical studies of user behaviour
• Empirical studies of programming and software engineering
• Computer supported cooperative work
• Computer mediated communication
• Virtual reality
• Mixed and augmented reality
• Intelligent user interfaces
• Presence
...