Computers in Human Behavior: Artificial Humans - Latest Publications

Physical anthropomorphism (but not gender presentation) influences trust in household robots
Computers in Human Behavior: Artificial Humans Pub Date: 2024-12-10 DOI: 10.1016/j.chbah.2024.100114
Colin Holbrook , Umesh Krishnamurthy , Paul P. Maglio , Alan R. Wagner
{"title":"Physical anthropomorphism (but not gender presentation) influences trust in household robots","authors":"Colin Holbrook ,&nbsp;Umesh Krishnamurthy ,&nbsp;Paul P. Maglio ,&nbsp;Alan R. Wagner","doi":"10.1016/j.chbah.2024.100114","DOIUrl":"10.1016/j.chbah.2024.100114","url":null,"abstract":"<div><div>This research explores anthropomorphism and gender presentation as prospective determinants of trust in household service robots with respect to care of objects (e.g., clothing, valuables), information (e.g., online passwords, credit card numbers), and living agents (e.g., pets, children). In Experiments 1 and 2, we compared trust in a humanoid robot presenting as male, female, or gender-neutral, finding no effects of gender presentation on any trust outcome. In Experiment 3, a fourth condition depicting a physically nonhumanoid robot was added. Relative to the humanoid conditions, participants reported less willingness to trust the nonhumanoid robot to care for their objects, personal information, or vulnerable agents; the reduced trust in care for objects or information was mediated by appraisals of the nonhumanoid as less intelligent and less likable, whereas the reduced trust in care of agents was mediated by appraisals of the nonhumanoid as less likable and less alive. In a parallel pattern, across all studies, participants’ appraisals of robots as intelligent tracked trust in them to take care of objects or information (but not agents), whereas appraisals of robots as likable and alive tracked trust in care of agents. The results are discussed as they inform past work examining effects of gender presentation and anthropomorphism on perceptions of, and trust in, robots.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100114"},"PeriodicalIF":0.0,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
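The mediation results above follow a standard indirect-effect logic: an appraisal (e.g., likability) carries the effect of robot form on trust. As a rough illustration of how such an effect is typically tested, here is a percentile-bootstrap mediation sketch in Python; the data are simulated and all variable names are hypothetical stand-ins, not the authors' materials or analysis code.

```python
# Percentile-bootstrap test of an indirect effect (X -> M -> Y).
# Illustrative only; variables are hypothetical stand-ins for the
# constructs in the abstract (robot form, likability, trust).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n).astype(float)       # humanoid (1) vs. nonhumanoid (0)
m = 0.5 * x + rng.normal(size=n)              # likability appraisal
y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # trust in care of agents

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]  # path X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # M -> Y given X
    return a * b

boot = []
idx = np.arange(n)
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boot.append(indirect_effect(x[s], m[s], y[s]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b, 95% CI: [{lo:.3f}, {hi:.3f}]")
```

A confidence interval excluding zero is the conventional evidence that the appraisal mediates the trust effect.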
Trust and acceptance of AI caregiving robots: The role of ethics and self-efficacy
Computers in Human Behavior: Artificial Humans Pub Date: 2024-12-07 DOI: 10.1016/j.chbah.2024.100115
Cathy S. Lin, Ying-Feng Kuo, Ting-Yu Wang
{"title":"Trust and acceptance of AI caregiving robots: The role of ethics and self-efficacy","authors":"Cathy S. Lin,&nbsp;Ying-Feng Kuo,&nbsp;Ting-Yu Wang","doi":"10.1016/j.chbah.2024.100115","DOIUrl":"10.1016/j.chbah.2024.100115","url":null,"abstract":"<div><div>As AI technology rapidly advances, ethical concerns have emerged as a global focus. This study introduces a second-order scale for analyzing AI ethics and proposes a model to examine the intention to use AI caregiving robots. The model incorporates elements from the Unified Theory of Acceptance and Use of Technology (UTAUT)—including social influence and performance expectancy—alongside AI ethics, self-efficacy, and trust in AI. The findings reveal that AI ethics and social influence enhance self-efficacy, which in turn increases trust in AI, performance expectancy, and the intention to use AI caregiving robots. Moreover, trust in AI and performance expectancy directly and positively influence the intention to adopt these robots. By incorporating AI ethics, the model provides a more comprehensive perspective, addressing dimensions often overlooked in conventional models. The proposed model is validated across diverse samples, demonstrating both its theoretical and practical significance in predicting AI usage intentions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100115"},"PeriodicalIF":0.0,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143154567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
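The paths described above (ethics and social influence -> self-efficacy -> trust, performance expectancy, and intention) map onto a standard structural equation model. A minimal sketch using the semopy package follows; the construct names, indicator names, and data file are assumptions for illustration, not the authors' specification.

```python
# Sketch of a UTAUT-style structural model in semopy's lavaan-like syntax.
# All latent constructs, indicators, and the data file are hypothetical.
import pandas as pd
from semopy import Model

desc = """
ethics =~ eth1 + eth2 + eth3
selfeff =~ se1 + se2 + se3
trust =~ tr1 + tr2 + tr3
perf =~ pe1 + pe2 + pe3
intent =~ in1 + in2 + in3
selfeff ~ ethics + social_influence
trust ~ selfeff
perf ~ selfeff
intent ~ selfeff + trust + perf
"""

df = pd.read_csv("survey.csv")  # hypothetical item-level survey data
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, standard errors, p-values
```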
Exploring predictors of AI chatbot usage intensity among students: Within- and between-person relationships based on the technology acceptance model
Computers in Human Behavior: Artificial Humans Pub Date: 2024-12-03 DOI: 10.1016/j.chbah.2024.100113
Anne-Kathrin Kleine , Insa Schaffernak , Eva Lermer
{"title":"Exploring predictors of AI chatbot usage intensity among students: Within- and between-person relationships based on the technology acceptance model","authors":"Anne-Kathrin Kleine ,&nbsp;Insa Schaffernak ,&nbsp;Eva Lermer","doi":"10.1016/j.chbah.2024.100113","DOIUrl":"10.1016/j.chbah.2024.100113","url":null,"abstract":"<div><div>The current research investigated the factors associated with the intensity of AI chatbot usage among university students, applying the Technology Acceptance Model (TAM) and its extended version, TAM3. A daily diary study over five days was conducted among university students, distinguishing between inter-individual (between-person) and intra-individual (within-person) variations. Multilevel structural equation modeling (SEM) was used to analyze the data. In Study 1 (<em>N</em> = 72), results indicated that AI chatbot anxiety was associated with perceived ease of use (PEOU) and perceived usefulness (PU), which serially mediated the link with AI chatbot usage intensity. Study 2 (<em>N</em> = 153) supported these findings and further explored the roles of facilitating conditions and subjective norm as additional predictors of PEOU and PU. Results from both studies demonstrated that, at the between-person level, students with higher average levels of PEOU and PU reported more intensive AI chatbot usage. In Study 1, the relationship between PEOU and usage intensity was mediated through PU at the within-person level, while the mediation model was not supported in Study 2. Post-hoc comparisons highlighted much higher variability in PEOU and PU in Study 1 compared to Study 2. The results have practical implications for enhancing AI chatbot adoption in educational settings. Emphasizing user-friendly interfaces, reducing AI-related anxiety, providing robust technical support, and leveraging peer influence may enhance the usage intensity of AI chatbots. This study underscores the necessity of considering both stable individual differences and dynamic daily influences to better understand AI chatbot usage patterns among students.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100113"},"PeriodicalIF":0.0,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
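The within-/between-person distinction at the heart of this study is usually operationalized by person-mean centering before fitting the multilevel SEM. A minimal preprocessing sketch, assuming hypothetical column names in a long-format diary dataset:

```python
# Person-mean centering: splits each daily measure into a stable
# between-person component and a daily within-person deviation.
# Column names (person_id, peou, pu, usage) are assumed for illustration.
import pandas as pd

df = pd.read_csv("diary.csv")  # long format: one row per person per day

for col in ["peou", "pu", "usage"]:
    person_mean = df.groupby("person_id")[col].transform("mean")
    df[f"{col}_between"] = person_mean            # level-2 predictor
    df[f"{col}_within"] = df[col] - person_mean   # level-1 daily deviation

# The _within columns feed the level-1 (daily) part of the multilevel
# model, and the _between columns the level-2 (person) part.
```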
Preventing promotion-focused goals: The impact of regulatory focus on responsible AI
Computers in Human Behavior: Artificial Humans Pub Date: 2024-12-03 DOI: 10.1016/j.chbah.2024.100112
Samuel N. Kirshner, Jessica Lawson
{"title":"Preventing promotion-focused goals: The impact of regulatory focus on responsible AI","authors":"Samuel N. Kirshner,&nbsp;Jessica Lawson","doi":"10.1016/j.chbah.2024.100112","DOIUrl":"10.1016/j.chbah.2024.100112","url":null,"abstract":"<div><div>Implementing black-box artificial intelligence (AI) often requires evaluating trade-offs related to responsible AI (RAI) (e.g., the trade-off between performance and features regarding AI's fairness or explainability). Synthesizing theories on regulatory focus and cognitive dissonance, we develop and test a model describing how organizational goals impact the dynamics of AI-based unethical pro-organizational behavior (UPB). First, we show that promotion-focused goals increase AI-based UPB and that RAI values act as a novel mediator. Promotion-focus goals significantly lower fairness in Study 1A and explainability in Study 1B, mediating the relationship between regulatory focus and AI-based UPB. Study 2A further supports RAI values as the driving mechanism of AI-based UPB using a moderation-by-processes design experiment. Study 2B provides evidence that AI-based UPB decisions can, in turn, lead to more unethical RAI values for promotion-focused firms, creating a negative RAI feedback loop within organizations. Our research provides theoretical implications and actionable insights for researchers, organizations, and policymakers seeking to improve the responsible use of AI.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100112"},"PeriodicalIF":0.0,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An economical measure of attitudes towards artificial intelligence in work, healthcare, and education (ATTARI-WHE)
Computers in Human Behavior: Artificial Humans Pub Date: 2024-11-28 DOI: 10.1016/j.chbah.2024.100106
Timo Gnambs , Jan-Philipp Stein , Markus Appel , Florian Griese , Sabine Zinn
{"title":"An economical measure of attitudes towards artificial intelligence in work, healthcare, and education (ATTARI-WHE)","authors":"Timo Gnambs ,&nbsp;Jan-Philipp Stein ,&nbsp;Markus Appel ,&nbsp;Florian Griese ,&nbsp;Sabine Zinn","doi":"10.1016/j.chbah.2024.100106","DOIUrl":"10.1016/j.chbah.2024.100106","url":null,"abstract":"<div><div>Artificial intelligence (AI) has profoundly transformed numerous facets of both private and professional life. Understanding how people evaluate AI is crucial for predicting its future adoption and addressing potential barriers. However, existing instruments measuring attitudes towards AI often focus on specific technologies or cross-domain evaluations, while domain-specific measurement instruments are scarce. Therefore, this study introduces the nine-item <em>Attitudes towards Artificial Intelligence in Work, Healthcare, and Education</em> (ATTARI-WHE) scale. Using a diverse sample of <em>N</em> = 1083 respondents from Germany, the psychometric properties of the instrument were evaluated. The results demonstrated low rates of missing responses, minimal response biases, and a robust measurement model that was invariant across sex, age, education, and employment status. These findings support the use of the ATTARI-WHE to assess AI attitudes in the work, healthcare, and education domains, with three items each. Its brevity makes it particularly well-suited for use in social surveys, web-based studies, or longitudinal research where assessment time is limited.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100106"},"PeriodicalIF":0.0,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
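Evaluating the psychometric properties of a short scale such as this one typically starts with internal consistency. As a generic illustration (not the authors' analysis), here is a Cronbach's alpha computation on simulated nine-item data:

```python
# Cronbach's alpha for a nine-item scale. Data are simulated from a
# single latent factor purely to demonstrate the computation.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(1083, 1))                  # one attitude factor
items = latent + rng.normal(scale=0.8, size=(1083, 9))  # nine noisy items
print(f"alpha = {cronbach_alpha(items):.2f}")
```

Invariance testing across groups, as reported in the abstract, would then proceed with multi-group confirmatory factor analysis on top of such item data.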
How do people react to political bias in generative artificial intelligence (AI)?
Computers in Human Behavior: Artificial Humans Pub Date: 2024-11-28 DOI: 10.1016/j.chbah.2024.100108
Uwe Messer
{"title":"How do people react to political bias in generative artificial intelligence (AI)?","authors":"Uwe Messer","doi":"10.1016/j.chbah.2024.100108","DOIUrl":"10.1016/j.chbah.2024.100108","url":null,"abstract":"<div><div>Generative Artificial Intelligence (GAI) such as Large Language Models (LLMs) have a concerning tendency to generate politically biased content. This is a challenge, as the emergence of GAI meets politically polarized societies. Therefore, this research investigates how people react to biased GAI-content based on their pre-existing political beliefs and how this influences the acceptance of GAI. In three experiments (N = 513), it was found that perceived alignment between user's political orientation and bias in generated content (in text and images) increases acceptance and reliance on GAI. Participants who perceived alignment were more likely to grant GAI access to sensitive smartphone functions and to endorse the use in critical domains (e.g., loan approval; social media moderation). Because users see GAI as a social actor, they consider perceived alignment as a sign of greater objectivity, thus granting aligned GAI access to more sensitive areas.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100108"},"PeriodicalIF":0.0,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Attributions of intent and moral responsibility to AI agents
Computers in Human Behavior: Artificial Humans Pub Date: 2024-11-26 DOI: 10.1016/j.chbah.2024.100107
Reem Ayad, Jason E. Plaks
{"title":"Attributions of intent and moral responsibility to AI agents","authors":"Reem Ayad,&nbsp;Jason E. Plaks","doi":"10.1016/j.chbah.2024.100107","DOIUrl":"10.1016/j.chbah.2024.100107","url":null,"abstract":"<div><div>Moral transactions are increasingly infused with decision input from AI agents. To what extent do observers believe that AI agents are responsible for their own actions? How do these AI agents' socio-psychological features affect observers' judgment of them when they transgress? With full factorial, between-participant designs, we presented participants with vignettes in which an AI agent contributed to a negative outcome either intentionally or unintentionally. We independently manipulated four features of the agent's mind: its adherence to moral values, autonomy, emotional self-awareness, and social connectedness. In Study 1 (<em>N</em> = 2012), AI agents that intentionally contributed to a negative outcome consistently received harsher judgments than AI agents that contributed unintentionally. For unintentional actions, socially connected AI agents received less harsh judgments than socially disconnected AI agents. In Studies 2a-c (<em>N</em> = 1507), these judgments were explained by ratings of the socially connected AI agent's ‘mind’ as less distinct from the mind of its programmers (Study 2b) and that this kind of agent also possessed less free will (Study 2c). We discuss the implications of these findings in advancing the field's understanding of the moral psychology—and design—of AI agents.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100107"},"PeriodicalIF":0.0,"publicationDate":"2024-11-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
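Full factorial, between-participant vignette designs like this one are conventionally analyzed with factorial ANOVA on the judgment ratings. A minimal two-factor sketch on simulated data follows; the factor and outcome names are hypothetical and the analysis shown is generic, not the authors' exact model.

```python
# Two-factor between-participants ANOVA (intent x social connectedness)
# on simulated judgment data. Factor and outcome names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(2)
n = 400
df = pd.DataFrame({
    "intentional": rng.integers(0, 2, n),  # intentional (1) vs. unintentional (0)
    "connected": rng.integers(0, 2, n),    # socially connected (1) vs. not (0)
})
# Simulated pattern from the abstract: intent raises harshness; for
# unintentional actions, connectedness softens judgments.
df["judgment"] = (1.0 * df["intentional"]
                  - 0.3 * (1 - df["intentional"]) * df["connected"]
                  + rng.normal(size=n))

model = ols("judgment ~ C(intentional) * C(connected)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction
```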
The fluency-based semantic network of LLMs differs from humans
Computers in Human Behavior: Artificial Humans Pub Date: 2024-11-09 DOI: 10.1016/j.chbah.2024.100103
Ye Wang , Yaling Deng , Ge Wang , Tong Li , Hongjiang Xiao , Yuan Zhang
{"title":"The fluency-based semantic network of LLMs differs from humans","authors":"Ye Wang ,&nbsp;Yaling Deng ,&nbsp;Ge Wang ,&nbsp;Tong Li ,&nbsp;Hongjiang Xiao ,&nbsp;Yuan Zhang","doi":"10.1016/j.chbah.2024.100103","DOIUrl":"10.1016/j.chbah.2024.100103","url":null,"abstract":"<div><div>Modern Large Language Models (LLMs) exhibit complexity and granularity similar to humans in the field of natural language processing, challenging the boundaries between humans and machines in language understanding and creativity. However, whether the semantic network of LLMs is similar to humans is still unclear. We examined the representative closed-source LLMs, GPT-3.5-Turbo and GPT-4, with open-source LLMs, LLaMA-2-70B, LLaMA-3-8B, LLaMA-3-70B using semantic fluency tasks widely used to study the structure of semantic networks in humans. To enhance the comparability of semantic networks between humans and LLMs, we innovatively employed role-playing to generate multiple agents, which is equivalent to recruiting multiple LLM participants. The results indicate that the semantic network of LLMs has poorer interconnectivity, local association organization, and flexibility compared to humans, which suggests that LLMs have lower search efficiency and more rigid thinking in the semantic space and may further affect their performance in creative writing and reasoning.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100103"},"PeriodicalIF":0.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
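Fluency-based semantic networks are commonly built by linking words produced adjacently in a fluency list and then comparing graph metrics such as clustering and average path length, which plausibly correspond to the "interconnectivity" and "local association organization" the abstract mentions. A minimal sketch with networkx, using made-up fluency lists; the paper's exact construction method may differ.

```python
# Build a semantic network from fluency lists by linking adjacent
# responses, then compute standard network metrics. The word lists
# here are invented, standing in for human or LLM-agent responses.
import networkx as nx

fluency_lists = [
    ["dog", "cat", "horse", "cow", "sheep"],
    ["dog", "wolf", "fox", "cat", "lion"],
    ["lion", "tiger", "cat", "dog", "horse"],
]

G = nx.Graph()
for words in fluency_lists:
    for a, b in zip(words, words[1:]):  # edge between adjacent responses
        G.add_edge(a, b)

# Restrict metrics to the giant component so path lengths are defined.
giant = G.subgraph(max(nx.connected_components(G), key=len))
print("clustering coefficient:", nx.average_clustering(giant))
print("avg shortest path length:", nx.average_shortest_path_length(giant))
```

On such graphs, lower clustering and longer paths for LLM-derived networks would indicate the poorer interconnectivity and less efficient semantic search the abstract reports.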
Social media influencer vs. virtual influencer: The mediating role of source credibility and authenticity in advertising effectiveness within AI influencer marketing
Computers in Human Behavior: Artificial Humans Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100100
Donggyu Kim, Zituo Wang
{"title":"Social media influencer vs. virtual influencer: The mediating role of source credibility and authenticity in advertising effectiveness within AI influencer marketing","authors":"Donggyu Kim,&nbsp;Zituo Wang","doi":"10.1016/j.chbah.2024.100100","DOIUrl":"10.1016/j.chbah.2024.100100","url":null,"abstract":"<div><div>This study examines the differences between social media influencers and virtual influencers in influencer marketing, focusing on their impact on marketing effectiveness. Using a between-subjects experimental design, the research explores how human influencers (HIs), human-like virtual influencers (HVIs), and anime-like virtual influencers (AVIs) affect perceptions of authenticity, source credibility, and overall marketing effectiveness. The study evaluates these influencer types across both for-profit and not-for-profit messaging contexts to determine how message intent influences audience reactions. The findings reveal that HVIs can be as effective as human influencers, especially in not-for-profit messaging, where their authenticity and source credibility are higher. However, when the messaging shifts to for-profit motives, the advantage of HVIs diminishes, aligning more closely with AVIs, which consistently show lower effectiveness. The study highlights the critical role that both authenticity and source credibility play in mediating the relationship between the type of influencer and advertising effectiveness.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100100"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Integrating sound effects and background music in Robotic storytelling – A series of online studies across different story genres
Computers in Human Behavior: Artificial Humans Pub Date: 2024-08-01 DOI: 10.1016/j.chbah.2024.100085
Sophia C. Steinhaeusser, Birgit Lugrin
{"title":"Integrating sound effects and background music in Robotic storytelling – A series of online studies across different story genres","authors":"Sophia C. Steinhaeusser,&nbsp;Birgit Lugrin","doi":"10.1016/j.chbah.2024.100085","DOIUrl":"10.1016/j.chbah.2024.100085","url":null,"abstract":"<div><p>Social robots as storytellers combine advantages of human storytellers – such as embodiment, gestures, and gaze – and audio books – large repertoire of voices, sound effects, and background music. However, research on adding non-speech sounds to robotic storytelling is yet in its infancy. The current series of four online studies investigates the influence of sound effects and background music in robotic storytelling on recipients’ storytelling experience and enjoyment, robot perception, and emotion induction across different story genres, i.e. horror, detective, romantic and humorous stories. Results indicate increased enjoyment for romantic stories and a trend for decreased fatigue for all genres when adding sound effects and background music to the robotic storytelling. Of the four genres examined, horror stories seem to benefit the most from the addition of non-speech sounds. Future research should provide guidelines for the selection of music and sound effects to improve the realization of non-speech sound-accompanied robotic storytelling. In conclusion, our ongoing research suggests that the integration of sound effects and background music holds promise for enhancing robotic storytelling, and our genre comparison provides first guidance of when to use them.</p></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 2","pages":"Article 100085"},"PeriodicalIF":0.0,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000458/pdfft?md5=39926971bcbec336bf3117e22eb44704&pid=1-s2.0-S2949882124000458-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141937463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0