{"title":"计算机与人类的相似性对心理社会评估中信息披露的影响:一项随机试验","authors":"Tingting Zhu, Elizabeth Broadbent","doi":"10.1016/j.chb.2025.108683","DOIUrl":null,"url":null,"abstract":"<div><div>Highly realistic embodied conversational agents (virtual humans) are starting to be used in healthcare, yet we know little about how their human likeness affects self-disclosure compared to other digital assessment methods. This research aimed to investigate the effects of a computer's human likeness on disclosure during a psychosocial assessment. 160 participants (mean age 24 years, 117 females, 42 males, 1 gender diverse) were randomized to receive a psychosocial interview from a realistic virtual human, a text chatbot, or an online questionnaire. The assessment comprised 18 close-ended items and six open-ended questions on diet, exercise, sexual practices, substance use, recent emotional experiences and loneliness. Socially desirable responding and unwillingness to respond were identified from answers, amount of disclosure was assessed from word count, and perceived anthropomorphism was assessed using self-report. Results demonstrated that for sensitive questions, there was higher socially desirable responding in the virtual human group as shown by significantly lower loneliness scores and higher rates of declining to answer a question compared to the other groups. For non-sensitive questions, socially desirable responding and proportion of declined answers did not differ by group. Participants in the virtual human group used more words when reporting stressful events and positive emotional experiences. Thematic analysis showed people felt rapport with both the virtual human and chatbot, but some felt social evaluative pressure with the virtual human. This supports theories that as human likeness cues increase, humans treat robots more socially. These findings should be considered when humanlike technologies are used clinically.</div></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"169 ","pages":"Article 108683"},"PeriodicalIF":9.0000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Effects of a computer's human likeness on disclosure in psychosocial assessments: A randomised trial\",\"authors\":\"Tingting Zhu, Elizabeth Broadbent\",\"doi\":\"10.1016/j.chb.2025.108683\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Highly realistic embodied conversational agents (virtual humans) are starting to be used in healthcare, yet we know little about how their human likeness affects self-disclosure compared to other digital assessment methods. This research aimed to investigate the effects of a computer's human likeness on disclosure during a psychosocial assessment. 160 participants (mean age 24 years, 117 females, 42 males, 1 gender diverse) were randomized to receive a psychosocial interview from a realistic virtual human, a text chatbot, or an online questionnaire. The assessment comprised 18 close-ended items and six open-ended questions on diet, exercise, sexual practices, substance use, recent emotional experiences and loneliness. Socially desirable responding and unwillingness to respond were identified from answers, amount of disclosure was assessed from word count, and perceived anthropomorphism was assessed using self-report. 
Results demonstrated that for sensitive questions, there was higher socially desirable responding in the virtual human group as shown by significantly lower loneliness scores and higher rates of declining to answer a question compared to the other groups. For non-sensitive questions, socially desirable responding and proportion of declined answers did not differ by group. Participants in the virtual human group used more words when reporting stressful events and positive emotional experiences. Thematic analysis showed people felt rapport with both the virtual human and chatbot, but some felt social evaluative pressure with the virtual human. This supports theories that as human likeness cues increase, humans treat robots more socially. These findings should be considered when humanlike technologies are used clinically.</div></div>\",\"PeriodicalId\":48471,\"journal\":{\"name\":\"Computers in Human Behavior\",\"volume\":\"169 \",\"pages\":\"Article 108683\"},\"PeriodicalIF\":9.0000,\"publicationDate\":\"2025-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S074756322500130X\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S074756322500130X","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Effects of a computer's human likeness on disclosure in psychosocial assessments: A randomised trial
Highly realistic embodied conversational agents (virtual humans) are starting to be used in healthcare, yet we know little about how their human likeness affects self-disclosure compared with other digital assessment methods. This research aimed to investigate the effects of a computer's human likeness on disclosure during a psychosocial assessment. A total of 160 participants (mean age 24 years; 117 females, 42 males, one gender diverse) were randomised to receive a psychosocial interview from a realistic virtual human, a text chatbot, or an online questionnaire. The assessment comprised 18 closed-ended items and six open-ended questions on diet, exercise, sexual practices, substance use, recent emotional experiences, and loneliness. Socially desirable responding and unwillingness to respond were identified from answers, amount of disclosure was assessed from word count, and perceived anthropomorphism was assessed using self-report. Results demonstrated that, for sensitive questions, socially desirable responding was higher in the virtual human group, as shown by significantly lower loneliness scores and higher rates of declining to answer a question compared with the other groups. For non-sensitive questions, socially desirable responding and the proportion of declined answers did not differ by group. Participants in the virtual human group used more words when reporting stressful events and positive emotional experiences. Thematic analysis showed that people felt rapport with both the virtual human and the chatbot, but some felt social evaluative pressure with the virtual human. This supports theories that, as human likeness cues increase, humans treat robots more socially. These findings should be considered when humanlike technologies are used clinically.
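As an illustration only (the paper does not publish its analysis code), the sketch below shows how disclosure could be operationalised as a word count per open-ended answer and compared across the three randomised groups. The column names, the example data, and the choice of a Kruskal-Wallis test are assumptions made for this sketch, not the authors' method.

```python
# Illustrative sketch, not the authors' analysis: operationalise "amount of
# disclosure" as the word count of a free-text answer and compare the three
# randomised groups (virtual human, chatbot, questionnaire).
import pandas as pd
from scipy import stats

# Hypothetical data: one row per participant, with group assignment and a
# free-text answer to an open-ended question about a recent stressful event.
df = pd.DataFrame({
    "group": ["virtual_human", "chatbot", "questionnaire"] * 3,
    "stressful_event_answer": [
        "I had a difficult week at work and could not sleep well.",
        "Exams were quite stressful.",
        "Nothing much happened.",
    ] * 3,
})

# Disclosure measure: number of words in the free-text answer.
df["word_count"] = df["stressful_event_answer"].str.split().str.len()

# Compare disclosure across groups with a non-parametric test, since word
# counts are typically skewed.
samples = [g["word_count"].to_numpy() for _, g in df.groupby("group")]
h_stat, p_value = stats.kruskal(*samples)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
```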
Journal introduction:
Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even if they have limited knowledge of computers.