{"title":"Attributions of intent and moral responsibility to AI agents","authors":"Reem Ayad, Jason E. Plaks","doi":"10.1016/j.chbah.2024.100107","DOIUrl":"10.1016/j.chbah.2024.100107","url":null,"abstract":"<div><div>Moral transactions are increasingly infused with decision input from AI agents. To what extent do observers believe that AI agents are responsible for their own actions? How do these AI agents' socio-psychological features affect observers' judgment of them when they transgress? With full factorial, between-participant designs, we presented participants with vignettes in which an AI agent contributed to a negative outcome either intentionally or unintentionally. We independently manipulated four features of the agent's mind: its adherence to moral values, autonomy, emotional self-awareness, and social connectedness. In Study 1 (<em>N</em> = 2012), AI agents that intentionally contributed to a negative outcome consistently received harsher judgments than AI agents that contributed unintentionally. For unintentional actions, socially connected AI agents received less harsh judgments than socially disconnected AI agents. In Studies 2a-c (<em>N</em> = 1507), these judgments were explained by ratings of the socially connected AI agent's ‘mind’ as less distinct from the mind of its programmers (Study 2b) and that this kind of agent also possessed less free will (Study 2c). We discuss the implications of these findings in advancing the field's understanding of the moral psychology—and design—of AI agents.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100107"},"PeriodicalIF":0.0,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The fluency-based semantic network of LLMs differs from humans","authors":"Ye Wang , Yaling Deng , Ge Wang , Tong Li , Hongjiang Xiao , Yuan Zhang","doi":"10.1016/j.chbah.2024.100103","DOIUrl":"10.1016/j.chbah.2024.100103","url":null,"abstract":"<div><div>Modern Large Language Models (LLMs) exhibit complexity and granularity similar to humans in the field of natural language processing, challenging the boundaries between humans and machines in language understanding and creativity. However, whether the semantic network of LLMs is similar to humans is still unclear. We examined the representative closed-source LLMs, GPT-3.5-Turbo and GPT-4, with open-source LLMs, LLaMA-2-70B, LLaMA-3-8B, LLaMA-3-70B using semantic fluency tasks widely used to study the structure of semantic networks in humans. To enhance the comparability of semantic networks between humans and LLMs, we innovatively employed role-playing to generate multiple agents, which is equivalent to recruiting multiple LLM participants. The results indicate that the semantic network of LLMs has poorer interconnectivity, local association organization, and flexibility compared to humans, which suggests that LLMs have lower search efficiency and more rigid thinking in the semantic space and may further affect their performance in creative writing and reasoning.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100103"},"PeriodicalIF":0.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social media influencer vs. virtual influencer: The mediating role of source credibility and authenticity in advertising effectiveness within AI influencer marketing","authors":"Donggyu Kim, Zituo Wang","doi":"10.1016/j.chbah.2024.100100","DOIUrl":"10.1016/j.chbah.2024.100100","url":null,"abstract":"<div><div>This study examines the differences between social media influencers and virtual influencers in influencer marketing, focusing on their impact on marketing effectiveness. Using a between-subjects experimental design, the research explores how human influencers (HIs), human-like virtual influencers (HVIs), and anime-like virtual influencers (AVIs) affect perceptions of authenticity, source credibility, and overall marketing effectiveness. The study evaluates these influencer types across both for-profit and not-for-profit messaging contexts to determine how message intent influences audience reactions. The findings reveal that HVIs can be as effective as human influencers, especially in not-for-profit messaging, where their authenticity and source credibility are higher. However, when the messaging shifts to for-profit motives, the advantage of HVIs diminishes, aligning more closely with AVIs, which consistently show lower effectiveness. The study highlights the critical role that both authenticity and source credibility play in mediating the relationship between the type of influencer and advertising effectiveness.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100100"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating sound effects and background music in Robotic storytelling – A series of online studies across different story genres","authors":"Sophia C. Steinhaeusser, Birgit Lugrin","doi":"10.1016/j.chbah.2024.100085","DOIUrl":"10.1016/j.chbah.2024.100085","url":null,"abstract":"<div><p>Social robots as storytellers combine advantages of human storytellers – such as embodiment, gestures, and gaze – and audio books – large repertoire of voices, sound effects, and background music. However, research on adding non-speech sounds to robotic storytelling is yet in its infancy. The current series of four online studies investigates the influence of sound effects and background music in robotic storytelling on recipients’ storytelling experience and enjoyment, robot perception, and emotion induction across different story genres, i.e. horror, detective, romantic and humorous stories. Results indicate increased enjoyment for romantic stories and a trend for decreased fatigue for all genres when adding sound effects and background music to the robotic storytelling. Of the four genres examined, horror stories seem to benefit the most from the addition of non-speech sounds. Future research should provide guidelines for the selection of music and sound effects to improve the realization of non-speech sound-accompanied robotic storytelling. In conclusion, our ongoing research suggests that the integration of sound effects and background music holds promise for enhancing robotic storytelling, and our genre comparison provides first guidance of when to use them.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100085"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000458/pdfft?md5=39926971bcbec336bf3117e22eb44704&pid=1-s2.0-S2949882124000458-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141937463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When own interest stands against the “greater good” – Decision randomization in ethical dilemmas of autonomous systems that involve their user’s self-interest","authors":"Anja Bodenschatz","doi":"10.1016/j.chbah.2024.100097","DOIUrl":"10.1016/j.chbah.2024.100097","url":null,"abstract":"<div><div>Autonomous systems (ASs) decide upon ethical dilemmas and their artificial intelligence as well as situational settings become more and more complex. However, to study common-sense morality concerning ASs abstracted dilemmas on autonomous vehicle (AV) accidents are a common tool. A special case of ethical dilemmas is when the AS’s users are affected. Many people want AVs to adhere to utilitarian programming (e.g., to save the larger group), or egalitarian programming (i.e., to treat every person equally). However, they want their own AV to protect them instead of the “greater good”. That people reject utilitarian programming as an AS’s user while supporting the idea from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability, which would implement egalitarian programming, have not been elicited for dilemmas involving self-interest: decision randomization. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where people are the sole passenger of an AV, and their survival stands against the survival of several others. Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and the urge of ASs’ users for self-protection.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100097"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating generative AI in data science programming: Group differences in hint requests","authors":"Tenzin Doleck, Pedram Agand, Dylan Pirrotta","doi":"10.1016/j.chbah.2024.100089","DOIUrl":"10.1016/j.chbah.2024.100089","url":null,"abstract":"<div><p>Generative AI applications have increasingly gained visibility in recent educational literature. Yet less is known about how access to generative tools, such as ChatGPT, influences help-seeking during complex problem-solving. In this paper, we aim to advance the understanding of learners' use of a support strategy (hints) when solving data science programming tasks in an online AI-enabled learning environment. The study compared two conditions: students solving problems in <em>DaTu</em> with AI assistance (<em>N</em> = 45) and those without AI assistance (<em>N</em> = 44). Findings reveal no difference in hint-seeking behavior between the two groups, suggesting that the integration of AI assistance has minimal impact on how individuals seek help. The findings also suggest that the availability of AI assistance does not necessarily reduce learners’ reliance on support strategies (such as hints). The current study advances data science education and research by exploring the influence of AI assistance during complex data science problem-solving. We discuss implications and identify paths for future research.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100089"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000495/pdfft?md5=d2364f734cd75435ea2c327fb376b30e&pid=1-s2.0-S2949882124000495-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142230120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI as decision aid or delegated agent: The effects of trust dimensions on the adoption of AI digital agents","authors":"Aman Pathak, Veena Bansal","doi":"10.1016/j.chbah.2024.100094","DOIUrl":"10.1016/j.chbah.2024.100094","url":null,"abstract":"<div><div>AI digital agents may act as decision-aid or as delegated agents. A decision-aid agent helps a user make decisions, whereas a delegated agent makes decisions on behalf of the consumer. The study determines the factors affecting the adoption intention of AI digital agents as decision aids and delegated agents. The domain of study is banking, financial services, and Insurance sector (BFSI). Due to the unique characteristics of AI digital agents, trust has been identified as an important construct in the extant literature. The study decomposed trust into social, cognitive, and affective trust. We incorporated PLS-SEM and fsQCA to examine the factors drawn from the literature. The findings from PLS-SEM suggest that perceived AI quality affects cognitive trust, perceived usefulness affects affective trust, and social trust affects cognitive and affective trust. The intention to adopt AI as a decision-aid is influenced by affective and cognitive trust. The intention to adopt AI as delegated agents is influenced by social, cognitive, and affective trust. FsQCA findings indicate that combining AI quality, perceived usefulness, and trust (social, cognitive, and affective) best explains the intention to adopt AI as a decision aid and delegated agents.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100094"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142426883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavioral and neural evidence for the underestimated attractiveness of faces synthesized using an artificial neural network","authors":"Satoshi Nishida","doi":"10.1016/j.chbah.2024.100104","DOIUrl":"10.1016/j.chbah.2024.100104","url":null,"abstract":"<div><div>Recent advancements in artificial intelligence (AI) have not eased human anxiety about AI. If such anxiety diminishes human preference for AI-synthesized visual information, the preference should be reduced solely by the belief that the information is synthesized by AI, independently of its appearance. This study tested this hypothesis by asking experimental participants to rate the attractiveness of faces synthesized by an artificial neural network, under the false instruction that some faces were real and others were synthetic. This experimental design isolated the impact of belief on attractiveness ratings from the actual facial appearance. Brain responses were also recorded with fMRI to examine the neural basis of this belief effect. The results showed that participants rated faces significantly lower when they believed them to be synthetic, and this belief altered the responsiveness of fMRI signals to facial attractiveness in the right fusiform cortex. These findings support the notion that human preference for visual information is reduced solely due to the belief that the information is synthesized by AI, suggesting that AI and robot design should focus not only on enhancing appearance but also on alleviating human anxiety about them.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100104"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How voice and helpfulness shape perceptions in human–agent teams","authors":"Samuel Westby , Richard J. Radke , Christoph Riedl , Brook Foucault Welles","doi":"10.1016/j.chbah.2024.100101","DOIUrl":"10.1016/j.chbah.2024.100101","url":null,"abstract":"<div><div>Voice assistants are increasingly prevalent, from personal devices to team environments. This study explores how voice type and contribution quality influence human–agent team performance and perceptions of anthropomorphism, animacy, intelligence, and trustworthiness. By manipulating both, we reveal mechanisms of perception and clarify ambiguity in previous work. Our results show that the human resemblance of a voice assistant’s voice negatively interacts with the helpfulness of an agent’s contribution to flip its effect on perceived anthropomorphism and perceived animacy. This means human teammates interpret the agent’s contributions differently depending on its voice. Our study found no significant effect of voice on perceived intelligence, trustworthiness, or team performance. We find differences in these measures are caused by manipulating the helpfulness of an agent. These findings suggest that function matters more than form when designing agents for high-performing human–agent teams, but controlling perceptions of anthropomorphism and animacy can be unpredictable even with high human resemblance.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100101"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are humanoid robots perceived as mindless mannequins?","authors":"Emmanuele Tidoni , Emily S. Cross , Richard Ramsey , Michele Scandola","doi":"10.1016/j.chbah.2024.100105","DOIUrl":"10.1016/j.chbah.2024.100105","url":null,"abstract":"<div><div>The shape and texture of humans and humanoid robots provide perceptual information that help us to appropriately categorise these stimuli. However, it remains unclear which features and attributes are driving the assignment into human and non-human categories. To explore this issue, we ran a series of five preregistered experiments wherein we presented stimuli that varied in their appearance (i.e., humans, humanoid robots, non-human primates, mannequins, hammers, musical instruments) and asked participants to complete a match-to-category task (Experiments 1-2-3), a priming task (Experiment 4), or to rate each category along four dimensions (i.e., similarity, liveliness, body association, action association; Experiment 5). Results indicate that categorising human bodies and humanoid robots requires the integration of both the analyses of their physical shape and visual texture (i.e., to identify a humanoid robot we cannot only rely on its visual shape). Further, our behavioural findings suggest that human bodies may be represented as a special living category separate from non-human animal entities (i.e., primates). Moreover, results also suggest that categorising humans and humanoid robots may rely on a network of information typically associated to human being and inanimate objects respectively (e.g., humans can play musical instruments and have a mind while robots do not play musical instruments and do have not a human mind). Overall, the paradigms introduced here offer new avenues through which to study the perception of human and artificial agents, and how experiences with humanoid robots may change the perception of humanness along a robot—human continuum.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100105"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142700897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}