{"title":"An economical measure of attitudes towards artificial intelligence in work, healthcare, and education (ATTARI-WHE)","authors":"Timo Gnambs , Jan-Philipp Stein , Markus Appel , Florian Griese , Sabine Zinn","doi":"10.1016/j.chbah.2024.100106","DOIUrl":"10.1016/j.chbah.2024.100106","url":null,"abstract":"<div><div>Artificial intelligence (AI) has profoundly transformed numerous facets of both private and professional life. Understanding how people evaluate AI is crucial for predicting its future adoption and addressing potential barriers. However, existing instruments measuring attitudes towards AI often focus on specific technologies or cross-domain evaluations, while domain-specific measurement instruments are scarce. Therefore, this study introduces the nine-item <em>Attitudes towards Artificial Intelligence in Work, Healthcare, and Education</em> (ATTARI-WHE) scale. Using a diverse sample of <em>N</em> = 1083 respondents from Germany, the psychometric properties of the instrument were evaluated. The results demonstrated low rates of missing responses, minimal response biases, and a robust measurement model that was invariant across sex, age, education, and employment status. These findings support the use of the ATTARI-WHE to assess AI attitudes in the work, healthcare, and education domains, with three items each. Its brevity makes it particularly well-suited for use in social surveys, web-based studies, or longitudinal research where assessment time is limited.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100106"},"PeriodicalIF":0.0,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"How do people react to political bias in generative artificial intelligence (AI)?","authors":"Uwe Messer","doi":"10.1016/j.chbah.2024.100108","DOIUrl":"10.1016/j.chbah.2024.100108","url":null,"abstract":"<div><div>Generative Artificial Intelligence (GAI) such as Large Language Models (LLMs) have a concerning tendency to generate politically biased content. This is a challenge, as the emergence of GAI meets politically polarized societies. Therefore, this research investigates how people react to biased GAI-content based on their pre-existing political beliefs and how this influences the acceptance of GAI. In three experiments (N = 513), it was found that perceived alignment between user's political orientation and bias in generated content (in text and images) increases acceptance and reliance on GAI. Participants who perceived alignment were more likely to grant GAI access to sensitive smartphone functions and to endorse the use in critical domains (e.g., loan approval; social media moderation). Because users see GAI as a social actor, they consider perceived alignment as a sign of greater objectivity, thus granting aligned GAI access to more sensitive areas.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100108"},"PeriodicalIF":0.0,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attributions of intent and moral responsibility to AI agents","authors":"Reem Ayad, Jason E. Plaks","doi":"10.1016/j.chbah.2024.100107","DOIUrl":"10.1016/j.chbah.2024.100107","url":null,"abstract":"<div><div>Moral transactions are increasingly infused with decision input from AI agents. To what extent do observers believe that AI agents are responsible for their own actions? How do these AI agents' socio-psychological features affect observers' judgment of them when they transgress? With full factorial, between-participant designs, we presented participants with vignettes in which an AI agent contributed to a negative outcome either intentionally or unintentionally. We independently manipulated four features of the agent's mind: its adherence to moral values, autonomy, emotional self-awareness, and social connectedness. In Study 1 (<em>N</em> = 2012), AI agents that intentionally contributed to a negative outcome consistently received harsher judgments than AI agents that contributed unintentionally. For unintentional actions, socially connected AI agents received less harsh judgments than socially disconnected AI agents. In Studies 2a-c (<em>N</em> = 1507), these judgments were explained by ratings of the socially connected AI agent's ‘mind’ as less distinct from the mind of its programmers (Study 2b) and that this kind of agent also possessed less free will (Study 2c). We discuss the implications of these findings in advancing the field's understanding of the moral psychology—and design—of AI agents.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100107"},"PeriodicalIF":0.0,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The fluency-based semantic network of LLMs differs from humans","authors":"Ye Wang , Yaling Deng , Ge Wang , Tong Li , Hongjiang Xiao , Yuan Zhang","doi":"10.1016/j.chbah.2024.100103","DOIUrl":"10.1016/j.chbah.2024.100103","url":null,"abstract":"<div><div>Modern Large Language Models (LLMs) exhibit complexity and granularity similar to humans in the field of natural language processing, challenging the boundaries between humans and machines in language understanding and creativity. However, whether the semantic network of LLMs is similar to humans is still unclear. We examined the representative closed-source LLMs, GPT-3.5-Turbo and GPT-4, with open-source LLMs, LLaMA-2-70B, LLaMA-3-8B, LLaMA-3-70B using semantic fluency tasks widely used to study the structure of semantic networks in humans. To enhance the comparability of semantic networks between humans and LLMs, we innovatively employed role-playing to generate multiple agents, which is equivalent to recruiting multiple LLM participants. The results indicate that the semantic network of LLMs has poorer interconnectivity, local association organization, and flexibility compared to humans, which suggests that LLMs have lower search efficiency and more rigid thinking in the semantic space and may further affect their performance in creative writing and reasoning.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100103"},"PeriodicalIF":0.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Social media influencer vs. virtual influencer: The mediating role of source credibility and authenticity in advertising effectiveness within AI influencer marketing","authors":"Donggyu Kim, Zituo Wang","doi":"10.1016/j.chbah.2024.100100","DOIUrl":"10.1016/j.chbah.2024.100100","url":null,"abstract":"<div><div>This study examines the differences between social media influencers and virtual influencers in influencer marketing, focusing on their impact on marketing effectiveness. Using a between-subjects experimental design, the research explores how human influencers (HIs), human-like virtual influencers (HVIs), and anime-like virtual influencers (AVIs) affect perceptions of authenticity, source credibility, and overall marketing effectiveness. The study evaluates these influencer types across both for-profit and not-for-profit messaging contexts to determine how message intent influences audience reactions. The findings reveal that HVIs can be as effective as human influencers, especially in not-for-profit messaging, where their authenticity and source credibility are higher. However, when the messaging shifts to for-profit motives, the advantage of HVIs diminishes, aligning more closely with AVIs, which consistently show lower effectiveness. The study highlights the critical role that both authenticity and source credibility play in mediating the relationship between the type of influencer and advertising effectiveness.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100100"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating sound effects and background music in Robotic storytelling – A series of online studies across different story genres","authors":"Sophia C. Steinhaeusser, Birgit Lugrin","doi":"10.1016/j.chbah.2024.100085","DOIUrl":"10.1016/j.chbah.2024.100085","url":null,"abstract":"<div><p>Social robots as storytellers combine advantages of human storytellers – such as embodiment, gestures, and gaze – and audio books – large repertoire of voices, sound effects, and background music. However, research on adding non-speech sounds to robotic storytelling is yet in its infancy. The current series of four online studies investigates the influence of sound effects and background music in robotic storytelling on recipients’ storytelling experience and enjoyment, robot perception, and emotion induction across different story genres, i.e. horror, detective, romantic and humorous stories. Results indicate increased enjoyment for romantic stories and a trend for decreased fatigue for all genres when adding sound effects and background music to the robotic storytelling. Of the four genres examined, horror stories seem to benefit the most from the addition of non-speech sounds. Future research should provide guidelines for the selection of music and sound effects to improve the realization of non-speech sound-accompanied robotic storytelling. In conclusion, our ongoing research suggests that the integration of sound effects and background music holds promise for enhancing robotic storytelling, and our genre comparison provides first guidance of when to use them.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100085"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000458/pdfft?md5=39926971bcbec336bf3117e22eb44704&pid=1-s2.0-S2949882124000458-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141937463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"When own interest stands against the “greater good” – Decision randomization in ethical dilemmas of autonomous systems that involve their user’s self-interest","authors":"Anja Bodenschatz","doi":"10.1016/j.chbah.2024.100097","DOIUrl":"10.1016/j.chbah.2024.100097","url":null,"abstract":"<div><div>Autonomous systems (ASs) decide upon ethical dilemmas and their artificial intelligence as well as situational settings become more and more complex. However, to study common-sense morality concerning ASs abstracted dilemmas on autonomous vehicle (AV) accidents are a common tool. A special case of ethical dilemmas is when the AS’s users are affected. Many people want AVs to adhere to utilitarian programming (e.g., to save the larger group), or egalitarian programming (i.e., to treat every person equally). However, they want their own AV to protect them instead of the “greater good”. That people reject utilitarian programming as an AS’s user while supporting the idea from an impartial perspective has been termed the “social dilemma of AVs”. Meanwhile, preferences for another technical capability, which would implement egalitarian programming, have not been elicited for dilemmas involving self-interest: decision randomization. This paper investigates normative and descriptive preferences for a self-protective, self-sacrificial, or randomized choice by an AS in a dilemma where people are the sole passenger of an AV, and their survival stands against the survival of several others. Results suggest that randomization may mitigate the “social dilemma of AVs” by bridging between a societally accepted programming and the urge of ASs’ users for self-protection.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100097"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142578719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Integrating generative AI in data science programming: Group differences in hint requests","authors":"Tenzin Doleck, Pedram Agand, Dylan Pirrotta","doi":"10.1016/j.chbah.2024.100089","DOIUrl":"10.1016/j.chbah.2024.100089","url":null,"abstract":"<div><p>Generative AI applications have increasingly gained visibility in recent educational literature. Yet less is known about how access to generative tools, such as ChatGPT, influences help-seeking during complex problem-solving. In this paper, we aim to advance the understanding of learners' use of a support strategy (hints) when solving data science programming tasks in an online AI-enabled learning environment. The study compared two conditions: students solving problems in <em>DaTu</em> with AI assistance (<em>N</em> = 45) and those without AI assistance (<em>N</em> = 44). Findings reveal no difference in hint-seeking behavior between the two groups, suggesting that the integration of AI assistance has minimal impact on how individuals seek help. The findings also suggest that the availability of AI assistance does not necessarily reduce learners’ reliance on support strategies (such as hints). The current study advances data science education and research by exploring the influence of AI assistance during complex data science problem-solving. We discuss implications and identify paths for future research.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100089"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000495/pdfft?md5=d2364f734cd75435ea2c327fb376b30e&pid=1-s2.0-S2949882124000495-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142230120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI as decision aid or delegated agent: The effects of trust dimensions on the adoption of AI digital agents","authors":"Aman Pathak, Veena Bansal","doi":"10.1016/j.chbah.2024.100094","DOIUrl":"10.1016/j.chbah.2024.100094","url":null,"abstract":"<div><div>AI digital agents may act as decision-aid or as delegated agents. A decision-aid agent helps a user make decisions, whereas a delegated agent makes decisions on behalf of the consumer. The study determines the factors affecting the adoption intention of AI digital agents as decision aids and delegated agents. The domain of study is banking, financial services, and Insurance sector (BFSI). Due to the unique characteristics of AI digital agents, trust has been identified as an important construct in the extant literature. The study decomposed trust into social, cognitive, and affective trust. We incorporated PLS-SEM and fsQCA to examine the factors drawn from the literature. The findings from PLS-SEM suggest that perceived AI quality affects cognitive trust, perceived usefulness affects affective trust, and social trust affects cognitive and affective trust. The intention to adopt AI as a decision-aid is influenced by affective and cognitive trust. The intention to adopt AI as delegated agents is influenced by social, cognitive, and affective trust. FsQCA findings indicate that combining AI quality, perceived usefulness, and trust (social, cognitive, and affective) best explains the intention to adopt AI as a decision aid and delegated agents.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100094"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142426883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Behavioral and neural evidence for the underestimated attractiveness of faces synthesized using an artificial neural network","authors":"Satoshi Nishida","doi":"10.1016/j.chbah.2024.100104","DOIUrl":"10.1016/j.chbah.2024.100104","url":null,"abstract":"<div><div>Recent advancements in artificial intelligence (AI) have not eased human anxiety about AI. If such anxiety diminishes human preference for AI-synthesized visual information, the preference should be reduced solely by the belief that the information is synthesized by AI, independently of its appearance. This study tested this hypothesis by asking experimental participants to rate the attractiveness of faces synthesized by an artificial neural network, under the false instruction that some faces were real and others were synthetic. This experimental design isolated the impact of belief on attractiveness ratings from the actual facial appearance. Brain responses were also recorded with fMRI to examine the neural basis of this belief effect. The results showed that participants rated faces significantly lower when they believed them to be synthetic, and this belief altered the responsiveness of fMRI signals to facial attractiveness in the right fusiform cortex. These findings support the notion that human preference for visual information is reduced solely due to the belief that the information is synthesized by AI, suggesting that AI and robot design should focus not only on enhancing appearance but also on alleviating human anxiety about them.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100104"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142660938","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}