{"title":"Can robots elicit empathy? The effects of social robots’ appearance on emotional contagion","authors":"Wenjing Yang, Yunhui Xie","doi":"10.1016/j.chbah.2024.100049","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100049","url":null,"abstract":"<div><p>The increasing integration of robots as service providers or companions in daily life has prompted extensive research on the emotional aspects of human-robot interaction (HRI). Emotional contagion, a crucial factor influencing user experience in HRI, has gained considerable attention. However, limited research explores the influence of anthropomorphism and gender differences in robot modeling on emotional contagion in HRI, leaving a gap in reference guidance for social robot design and application. To address this, we investigate human-robot interactions in service scenarios, analyzing the impact of robot appearance, anthropomorphism, and gender on the transmission of positive and negative emotions. The experimental findings highlight the significant role of robot gender in influencing emotional contagion in HRI, while revealing the interaction effect between robot gender and anthropomorphism on emotional contagion. This interdisciplinary study provides empirical evidence that enriches research on the emotional aspects of HRI and contributes to more informed design of user experiences.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100049"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000094/pdfft?md5=f4230e3a212d9571f4324eaee8bb161a&pid=1-s2.0-S2949882124000094-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139733175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mass robotics: How do people communicate with, use, and feel about Alexa? A cross-cultural, user perspective","authors":"Autumn Edwards , Chad Edwards , Leopoldina Fortunati , Anna Maria Manganelli , Federico de Luca","doi":"10.1016/j.chbah.2024.100060","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100060","url":null,"abstract":"<div><p>For the first time, voice-based assistants (VBAs) allow studying the mass consumption of a robotic product in a ‘natural’ environment. The present paper investigates users' perspectives concerning Alexa, informed by the scholarly literature on the diffusion and appropriation of digital media, VBAs, and social robots, in general, in the domestic sphere. Besides CASA, a paradigm widely applied in robotics, we draw on <em>theories on users</em> and <em>theories on the use of users.</em> We explored individuals' use of Alexa in two countries—the US and Italy— through an online survey. Results indicated that: (1) Alexa's use follows the same modalities of the previous digital media; (2) Alexa's use is in an early phase of domestication, and thus, the role of users in reshaping this technological artifact is still limited; and (3) updates to the CASA paradigm are needed.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100060"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000203/pdfft?md5=81ef3750e700119c60e1e02b92e11d69&pid=1-s2.0-S2949882124000203-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140014587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mixed reality videography. Analyzing joint behavior of human-agent-interactions in extended realities","authors":"Jonathan Harth","doi":"10.1016/j.chbah.2024.100063","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100063","url":null,"abstract":"<div><p>Current research in human-agent interaction increasingly focuses on multimodal interactions with anthropomorphic virtual agents. However, most existing paradigms primarily emphasize only the user's perception, overlooking the actual emergent interaction processes. This paper addresses this gap by presenting a novel methodological approach that scrutinizes both the relationship and content levels of human-agent interaction. Utilizing a unique combination of mixed reality representations and mixed methods, our approach aims to uncover and analyze the joint behavior of human users and agents during interactions. Our approach offers new insights into the dynamics of proto-social human-agent interactions, with implications for improving the design and functionality of virtual agents in mixed reality settings.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100063"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000239/pdfft?md5=f1be36c6bb971ddd69dfe7374b25b60c&pid=1-s2.0-S2949882124000239-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140163173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Domain-general and -specific individual difference predictors of an uncanny valley and uncanniness effects","authors":"Alexander Diel , Michael Lewis","doi":"10.1016/j.chbah.2024.100041","DOIUrl":"10.1016/j.chbah.2024.100041","url":null,"abstract":"<div><p>Near humanlike artificial entities can appear eerie or uncanny. This <em>uncanny valley</em> is here investigated by testing five individual difference measures as predictors of uncanniness throughout a variety of stimuli. Coulrophobia predicted uncanniness of distorted faces, bodies, and androids and clowns; disgust sensitivity predicted the uncanniness of some distorted faces; the anxiety facet of neuroticism predicted the uncanniness of some distorted faces, bodies, and voices; deviancy aversion and need for structure predicted uncanniness of distorted places and voices. Taken together, the results suggest that while uncanniness can be caused by multiple, domain-independent (e.g., deviancy aversion) and domain-specific (e.g., disease avoidance) mechanisms, the uncanniness of androids specifically may be related to a fear of clowns, potentially due to a dislike of exaggerated human proportions.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100041"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212400001X/pdfft?md5=ec2492ac22f09dff178c148ffbc1d1d7&pid=1-s2.0-S294988212400001X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139392756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conveying chatbot personality through conversational cues in social media messages","authors":"Holger Heppner , Birte Schiffhauer , Udo Seelmeyer","doi":"10.1016/j.chbah.2024.100044","DOIUrl":"10.1016/j.chbah.2024.100044","url":null,"abstract":"<div><p>A perceived personality of a chatbot or conversational agent is mainly conveyed by the way they communicate verbally. In this online vignette study (N = 168) we examined the possibility of conveying personality in short social-media-like messages by adding simple conversational cues. Social-oriented and responsive conversational cues, as well as their combination had distinct effects on the perceived personalities of the chatbots. Social-oriented cues had a clear effect on most OCEAN personality traits, warmth, and anthropomorphism, while responsive cues only affected neuroticism. In combination, effects of social-oriented cues were countered by responsive cues, but not for all personality traits. Competence and trust were not affected by any of the used conversational cues. The findings show that very few conversational cues are sufficient to convey distinct personalities in short messages.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100044"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000045/pdfft?md5=83d78cf0fd6d64ec8945760e1207a8f0&pid=1-s2.0-S2949882124000045-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139638423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Picturing the fictitious person: An exploratory study on the effect of images on user perceptions of AI-generated personas","authors":"Joni Salminen , João M. Santos , Soon-gyo Jung , Bernard J. Jansen","doi":"10.1016/j.chbah.2024.100052","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100052","url":null,"abstract":"<div><p>Human-computer interaction (HCI) research is facing a vital question of the effectiveness of personas generated using artificial intelligence (AI). Addressing this question, this research explores user perceptions of AI-generated personas for textual content (GPT-4) and two image generation models (DALL-E and Midjourney). We evaluate whether the inclusion of images in AI-generated personas impacts user perception or if AI text descriptions alone suffice to create good personas. Recruiting 216 participants, we compare three AI-generated personas without images and those with either DALL-E or Midjourney-created images. Contrary to expectations from persona literature, the presence of images in AI-generated personas did not significantly impact user perceptions. Rather, the participants generally perceived AI-generated personas to be of good quality regardless of the inclusion of images. These findings suggest that textual content, i.e., the persona narrative, is the primary driver of user perceptions in AI-generated personas. Our findings contribute to the ongoing AI-HCI discourse and provide recommendations for designing AI-generated personas.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100052"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000124/pdfft?md5=a9f6bf5d9073bb889a4d2d35ca13bfd0&pid=1-s2.0-S2949882124000124-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139986457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human-in-the-loop in artificial intelligence in education: A review and entity-relationship (ER) analysis","authors":"Bahar Memarian, Tenzin Doleck","doi":"10.1016/j.chbah.2024.100053","DOIUrl":"10.1016/j.chbah.2024.100053","url":null,"abstract":"<div><h3>Background</h3><p>Human-in-the-loop research predominantly examines the interaction types and effects. A more structural and pragmatic exploration of humans and Artificial Intelligence or AI is lacking in the AI in education literature.</p></div><div><h3>Purpose</h3><p>In this systematic review, we follow the Entity-Relationship (ER) framework to identify trends in the entities, relationships, and attributes of human-in-the-loop AI in education.</p></div><div><h3>Methods</h3><p>An overview of <em>N</em> = 28 reviewed studies followed by their ER characteristics are summarized and analyzed.</p></div><div><h3>Results</h3><p>The dominant number of two or three-entity studies, one-sided relationships, few attributes, and many to many cardinalities may signal a lack of deliberation on beings that come to interact and influence human-in-the-loop and AI in education.</p></div><div><h3>Conclusion</h3><p>The contribution of this work is identifying the implications of human-in-the-loop and AI from a more formal ER perspective and acknowledging the many possibilities for placement of humans in the loop with the AI, system, and environment of interest.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100053"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000136/pdfft?md5=56381ca79ed57fe8c61050728815635d&pid=1-s2.0-S2949882124000136-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139830799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Can a virtual human increase mindfulness and reduce stress? A randomised trial","authors":"Mariam Karhiy , Mark Sagar , Michael Antoni , Kate Loveys , Elizabeth Broadbent","doi":"10.1016/j.chbah.2024.100069","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100069","url":null,"abstract":"<div><h3>Background</h3><p>Stress is a significant issue amongst university students, yet limited psychological services are available. Mindfulness is effective for stress reduction and can be delivered digitally to expand access to student populations. However, digital interventions often suffer from low engagement and poor adherence. A virtual human may improve engagement and adherence through its humanlike appearance and behaviours.</p></div><div><h3>Objective</h3><p>To examine whether a virtual human could reduce stress in university students at least as much as a teletherapist, and more than a chatbot, using a mindfulness intervention.</p></div><div><h3>Methods</h3><p>Stressed university students (N = 158) were randomly allocated to the virtual human (N = 54), chatbot (N = 54), or teletherapist (N = 50). 36 participants received each condition. Participants completed one lab session and were asked to do online homework sessions at least twice weekly for four weeks. Changes in self-reported stress and mindfulness, physiological stress indices, homework completion, and perceptions of the agent were compared between groups. Thematic analysis was conducted on participants’ responses to open-ended questions about the interventions.</p></div><div><h3>Results</h3><p>There were significant reductions in stress and increases in mindfulness across all groups. All groups had higher peripheral skin temperature post-intervention, and only the teletherapy group had higher electrodermal activity (reflecting elevated stress) post-intervention compared to baseline. There were no significant changes in heart rate. Homework adherence was significantly higher in the virtual human group, whereas homework satisfaction and engagement were lowest in the chatbot group. Thematic analysis found that people thought the robotic voice of the virtual human could be improved, the chatbot could be improved by adding audio, and that participants experienced feelings of judgement from the teletherapist.</p></div><div><h3>Discussion</h3><p>Overall, results support use of virtual humans for delivering mindfulness interventions in stressed students. Virtual humans may have the advantage over teletherapy and chatbots of increasing adherence in student populations, but more work is needed to increase perceived empathy and replicate results in other populations.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100069"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212400029X/pdfft?md5=74242b19d6c2fa1244faa14ce39bc34e&pid=1-s2.0-S294988212400029X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140348009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fine for others but not for me: The role of perspective in patients’ perception of artificial intelligence in online medical platforms","authors":"Matthias F.C. Hudecek , Eva Lermer , Susanne Gaube , Julia Cecil , Silke F. Heiss , Falk Batz","doi":"10.1016/j.chbah.2024.100046","DOIUrl":"10.1016/j.chbah.2024.100046","url":null,"abstract":"<div><p>In the near future, online medical platforms enabled by artificial intelligence (AI) technology will become increasingly more prevalent, allowing patients to use them directly without having to consult a human doctor. However, there is still little research from the patient's perspective on such AI-enabled tools. We, therefore, conducted a preregistered 2x3 between-subjects experiment (<em>N</em> = 266) to examine the influence of <em>perspective</em> (oneself vs. average person) and <em>source of advice</em> (AI vs. male physician vs. female physician) on the perception of a medical diagnosis and corresponding treatment recommendations. Results of robust ANOVAs showed a statistically significant interaction between the source of advice and perspective for all three dependent variables (i.e., evaluation of the diagnosis, evaluation of the treatment recommendation, and risk perception). People prefer the advice of human doctors to an AI when it comes to their own situation. In contrast, the participants made no differences between the sources of medical advice when it comes to assessing the situation of an average person. Our study contributes to a better understanding of the patient's perspective of modern digital health technology. As our findings suggest the perception of AI-enabled diagnostic tools is more critical when it comes to oneself, future research should examine the relevant factors that influence this perception.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100046"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000069/pdfft?md5=2fcb09cbbee613acb0eb286cb234004f&pid=1-s2.0-S2949882124000069-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139637127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User-driven prioritization of ethical principles for artificial intelligence systems","authors":"Yannick Fernholz , Tatiana Ermakova , B. Fabian , P. Buxmann","doi":"10.1016/j.chbah.2024.100055","DOIUrl":"10.1016/j.chbah.2024.100055","url":null,"abstract":"<div><p>Despite the progress of Artificial Intelligence (AI) and its contribution to the advancement of human society, the prioritization of ethical principles from the viewpoint of its users has not yet received much attention and empirical investigation. This is important to develop appropriate safeguards and increase the acceptance of AI-mediated technologies among all members of society.</p><p>In this research, we collected, integrated, and prioritized ethical principles for AI systems with respect to their relevance in different real-life application scenarios.</p><p>First, an overview of ethical principles for AI was systematically derived from various academic and non-academic sources. Our results clearly show that transparency, justice and fairness, non-maleficence, responsibility, and privacy are most frequently mentioned in this corpus of documents.</p><p>Next, an empirical survey to systematically identify users’ priorities was designed and conducted in the context of selected scenarios: AI-mediated recruitment (human resources), predictive policing, autonomous vehicles, and hospital robots.</p><p>We anticipate that the resulting ranking can serve as a valuable basis for formulating requirements for AI-mediated solutions and creating AI algorithms that prioritize users’ needs. Our target audience includes everyone who will be affected by AI systems, e.g., policy makers, algorithm developers, and system managers, as our ranking clearly depicts users’ awareness regarding AI ethics.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100055"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S294988212400015X/pdfft?md5=911f54e1aba722dbdf8fcef066dde5e5&pid=1-s2.0-S294988212400015X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139889572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}