{"title":"Trusting the machine: Exploring participant perceptions of AI-driven summaries in virtual focus groups with and without human oversight","authors":"Ye Wang , Huan Chen , Xiaofan Wei , Cheng Chang , Xinyi Zuo","doi":"10.1016/j.chbah.2025.100198","DOIUrl":"10.1016/j.chbah.2025.100198","url":null,"abstract":"<div><div>This study explores the use of AI-assisted summarization as part of a proposed AI moderation assistant for virtual focus group (VFG) settings, focusing on the calibration of trust through human oversight and transparency. To understand participant perspectives, this study employed a mixed-method approach: Study 1 conducted a focus group to gather initial data for the stimulus design of Study 2, and Study 2 was an online experiment that collected both quantitative and qualitative measures of perceptions of AI summarization across three groups—a control group, and two treatment groups (with vs. without human oversight). ANOVA and AI-assisted thematic analyses were performed. The findings indicate that AI summaries, with or without human oversight, were positively received by participants. However, AI summaries produced no notable differences in participants' satisfaction with the VFG application. Qualitative findings reveal that participants appreciate AI's efficiency in summarization but express concerns about accuracy, authenticity, and the potential for AI to lack genuine human understanding. The findings contribute to the literature on trust in AI by demonstrating that <strong>trust can be achieved through transparency</strong>. By revealing the <strong>coexistence of AI appreciation and aversion</strong>, the study offers nuanced insights into <strong>trust calibration</strong> within <strong>socially and emotionally sensitive communication contexts</strong>. 
These results also inform the <strong>integration of AI summarization into qualitative research workflows</strong>.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"6 ","pages":"Article 100198"},"PeriodicalIF":0.0,"publicationDate":"2025-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144934229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Effects of sensory reactivity and haptic interaction on children's anthropomorphism of a haptic robot","authors":"Hikaru Nozawa, Masaharu Kato","doi":"10.1016/j.chbah.2025.100186","DOIUrl":"10.1016/j.chbah.2025.100186","url":null,"abstract":"<div><div>Social touch is vital for developing stable attachments and social skills, and haptic robots could provide children opportunities to develop those attachments and skills. However, haptic robots are not guaranteed to suit every child, and individual differences exist in accepting these robots. In this study, we proposed that screening children's sensory reactivity can predict the suitable and challenging attributes for accepting these robots. Additionally, we investigated how sensory reactivity influences the tendency to anthropomorphize a haptic robot, as anthropomorphizing a robot is considered an indicator of accepting the robot. Sixty-seven preschool children aged 5–6 years participated. Results showed that the initial anthropomorphic tendency toward the robot was more likely to decrease with increasing atypicality in sensory reactivity, and haptic interaction with the robot tended to promote anthropomorphic tendency. A detailed analysis focusing on children's sensory insensitivity revealed polarized results: those actively seeking sensory information (i.e., <em>sensory seeking</em>) showed a lower anthropomorphic tendency toward the robot, whereas those who were passive (i.e., <em>low registration</em>) showed a higher anthropomorphic tendency. Importantly, haptic interaction with the robot mitigated the lower anthropomorphic tendency observed in sensory seekers. Finally, we found that the degree of anthropomorphizing the robot positively influenced physiological arousal level. 
These results indicate that children with atypical sensory reactivity may accept robots through haptic interaction. This extends previous research by demonstrating how individual sensory reactivity profiles modulate children's robot acceptance through physical interaction rather than visual observation alone. Future robots must be designed to interact in ways tailored to each child's sensory reactivity to develop stable attachment and social skills.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100186"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144828727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring dimensions of perceived anthropomorphism in conversational AI: Implications for human identity threat and dehumanization","authors":"Yejin Lee , Sang-Hwan Kim","doi":"10.1016/j.chbah.2025.100192","DOIUrl":"10.1016/j.chbah.2025.100192","url":null,"abstract":"<div><div>This study aims to identify humanlike traits in conversational AI (CAI) that influence human identity threat and dehumanization, and to propose design guidelines that mitigate these effects. An online survey was conducted with 323 participants. Factor analysis revealed four key dimensions of perceived anthropomorphism in CAI: Self-likeness, Communication & Memory, Social Adaptability, and Agency. Structural equation modeling showed that Self-likeness heightened both perceived human identity threat and dehumanization, whereas Agency significantly moderated these effects while also directly mitigating dehumanization. Social Adaptability generally reduced perceived human identity threat but amplified it when combined with high Self-likeness. Furthermore, younger individuals were more likely to experience perceived human identity threat and dehumanization, underscoring the importance of considering user age. 
By elucidating the psychological structure underlying users’ perceptions of CAI anthropomorphism, this study deepens understanding of its psychosocial implications and provides practical guidance for the ethical design of CAI systems.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100192"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trusting emotional support from generative artificial intelligence: a conceptual review","authors":"Riccardo Volpato , Lisa DeBruine , Simone Stumpf","doi":"10.1016/j.chbah.2025.100195","DOIUrl":"10.1016/j.chbah.2025.100195","url":null,"abstract":"<div><div>People are increasingly using generative artificial intelligence (AI) for emotional support, creating trust-based interactions with limited predictability and transparency. We address the fragmented nature of research on trust in AI through a multidisciplinary conceptual review, examining theoretical foundations for understanding trust in the emerging context of emotional support from generative AI. Through an in-depth literature search across human-computer interaction, computer-mediated communication, social psychology, mental health, economics, sociology, philosophy, and science and technology studies, we developed two principal contributions. First, we summarise relevant definitions of trust across disciplines. Second, based on our first contribution, we define trust in the context of emotional support provided by AI and present a categorisation of relevant concepts that recur across well-established research areas. 
Our work equips researchers with a map for navigating the literature and formulating hypotheses about AI-based mental health support, as well as important theoretical, methodological, and practical implications for advancing research in this area.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100195"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring the role of cognitive flexibility, digital competencies, and self-regulation skills on students' generative artificial intelligence anxiety","authors":"Fatma Gizem Karaoglan Yilmaz , Ramazan Yilmaz , Ahmet Berk Ustun , Hatice Uzun","doi":"10.1016/j.chbah.2025.100187","DOIUrl":"10.1016/j.chbah.2025.100187","url":null,"abstract":"<div><div>The purpose of the study is to examine the role of cognitive flexibility, digital competencies and self-regulation skills in reducing university students' artificial intelligence (AI) anxiety. The study proposes that, however numerous the potential benefits of AI in education, these benefits cannot be harnessed unless students' concerns about the technologies are suitably addressed. The correlational survey model was used in this study. Four well-established instruments were employed to collect data from students who studied in different public university faculties in Turkey. Participants were selected from students who had been using AI tools for educational purposes for at least six months. The findings showed that cognitive flexibility, digital competencies and self-regulation skills influence AI anxiety. Students with high cognitive flexibility had lower AI anxiety, while students with high digital competencies were better able to comprehend and use AI technologies. In addition, students with high self-regulation skills were able to manage their own learning processes more effectively and experienced less anxiety when using AI. As a result, increasing university students' digital competencies and self-regulation skills can help reduce their AI anxiety. Accordingly, educational institutions could offer programs to develop students' digital competencies and AI literacy. 
These programs can help students adapt to AI technologies more easily and reduce their anxiety by teaching them how to use these technologies effectively and efficiently.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100187"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144841790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to “From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents” [Computers in Human Behavior: Artificial Humans (2024) 100030]","authors":"Theo Araujo , Nadine Bol","doi":"10.1016/j.chbah.2025.100177","DOIUrl":"10.1016/j.chbah.2025.100177","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100177"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144921758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CanvasHero: The role of artificial intelligence in cultivating resilience among children and youth using the 6-part story method in mass war trauma","authors":"Yuval Haber , Inbar Levkovich , Iftach Tzafrir , Karny Gigi , Dror Yinon , Dorit Hadar Shoval , Zohar Elyoseph","doi":"10.1016/j.chbah.2025.100196","DOIUrl":"10.1016/j.chbah.2025.100196","url":null,"abstract":"<div><h3>Background</h3><div>The potential of Generative Artificial Intelligence (GenAI) to promote mental health is of great interest. Specifically, there is growing interest in integrating applied GenAI into psychotherapy or into the teacher/parent-child relationship. This paper describes CanvasHero, a GenAI tool that was developed following the devastating attacks on Israel in October 2023. It aims to promote resilience in children and adolescents who were evacuated from their homes due to the war. CanvasHero serves as a proof of concept for integrating GenAI as an additional element that can enrich and deepen interpersonal interaction.</div></div><div><h3>Tool description</h3><div>CanvasHero utilizes the BASIC Ph model and 6-Part Story Method for assessing and bolstering coping skills, aided by the interactive scaffolding and synthetic abilities of the GenAI. Key stages comprise (1) collaborative narrative construction between the child, a meaningful adult, and the GenAI; (2) analysis of resilience themes; and (3) generative visualization representing the child's story through DALL-E's imaging capabilities.</div></div><div><h3>Implementation protocol</h3><div>CanvasHero is designed for children ages 7–16 under adult supervision, with the HEART Checklist developed to structure this process. Sessions typically occur remotely via videoconference or in person.</div></div><div><h3>Intended outcomes</h3><div>CanvasHero aims to create a playful space for processing stress and trauma, identify resilience resources, and strengthen these capabilities. 
At the same time, risks in GenAI integration are mitigated via human oversight and an ethics-focused design.</div></div><div><h3>Conclusion</h3><div>CanvasHero exemplifies a GenAI application that can assist during wartime, serving as a psycho-educational mediator and facilitating an imaginative and playful space between children and meaningful adults. Further studies are required to evaluate effectiveness and potential risks.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100196"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144864602","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When AI gets Personal: Employee emotional responses to anthropomorphic AI agents in a virtual workspace","authors":"Anand P.A. van Zelderen , Sinuo Wu , Gergely Koszo , Jochen I. Menges","doi":"10.1016/j.chbah.2025.100189","DOIUrl":"10.1016/j.chbah.2025.100189","url":null,"abstract":"<div><div>Understanding how AI influences employee emotions is becoming critical as organizations prepare for widespread AI agent deployment. While existing research has explored human-AI interactions in corporate settings, little is known about how employees emotionally navigate relationships with AI agents exhibiting distinct personality traits. This empirical study examines white-collar employees' emotional responses while interacting with three generative AI agents in a virtual workspace, revealing novel social dynamics enabled by AI technologies. Using qualitative methods and inductive analysis, our findings show that anthropomorphic AI agents evoke a broad spectrum of emotions, from <em>connection</em> and <em>contentment</em> to <em>amusement</em> and <em>frustration</em>, extending beyond those typically triggered by web-based AI agents. Notably, participants experienced new emotional subsets, including unique manifestations of <em>relational assurance</em> and <em>perceived worthlessness</em>, introducing new emotional subcategories within established frameworks.</div><div>Moreover, the visual embodiment of AI agents in virtual workspaces significantly shapes user expectations and satisfaction. While a more human-like appearance can enhance engagement, it also introduces risks—a mismatch between an AI's visual representation and its actual behavior can heighten disappointment if the AI fails to meet human-like expectations. As organizations integrate AI agents into the workplace, our findings provide key insights for designing effective human-AI interactions. 
We emphasize the importance of human-centered design approaches that foster, rather than hinder, employee engagement, ensuring AI contributes positively to corporate environments.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100189"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144860476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Interpersonal influence matters: Trust contagion and repair in human-human-AI team","authors":"Emanuel Rojas, Debbie Hsu, Jingjing Huang, Mengyao Li","doi":"10.1016/j.chbah.2025.100194","DOIUrl":"10.1016/j.chbah.2025.100194","url":null,"abstract":"<div><div>As human-AI teams (HATs) become prevalent as a means to enhance team performance, interactions within multi-human-AI teams have been understudied, particularly how human interactions affect trust in AI teammates. This study investigated whether trust in AI can be contagious from human to human and whether this effect, named <em>trust contagion</em>, can serve as a trust repair strategy in multi-human-AI teams. Using a 2 (AI reliability: high and low, within-participants factor) × 3 (confederate trusting: trusting, neutral, distrusting, between-participants factor) mixed design, participants teamed up with a confederate and an AI teammate in a cooperative trust-based resource allocation game. Self-reported, behavioral, and conversational data were collected. We found that trust is contagious, yet positive and negative trust contagion effects were asymmetrical. While participants teamed with the trusting confederate used more positive words and showed high reliance and self-reported trust in the AI despite its errors, those teamed with the distrusting confederate showed only a significant decrease in reliance. Our results further show positive trust contagion can be used as a trust repair mechanism to mitigate trust drop after trust violations. Additionally, negative trust contagion showed modality-dependent effects, specifically in behavior. Positive trust contagion was advantageous when the AI was unreliable, while negative trust contagion was effective in decreasing reliance when the AI was performing well. Trust contagion was explained through interpersonal trust between participant and confederate, which mediated the link between confederate trusting levels and trust in AI. 
Our research extends the study of trust beyond dyadic interactions, showing that trust in AI is contagious among human teammates and that positive contagion can repair trust.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100194"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144886220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Corrigendum to “Comparing ChatGPT with human judgements of social traits from face photographs” [Computers in Human Behavior: Artificial Humans 4C (2025) 100156]","authors":"Robin S.S. Kramer","doi":"10.1016/j.chbah.2025.100179","DOIUrl":"10.1016/j.chbah.2025.100179","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100179"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144921760","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}