{"title":"A multinational assessment of AI literacy among university students in Germany, the UK, and the US","authors":"Marie Hornberger , Arne Bewersdorff , Daniel S. Schiff , Claudia Nerdel","doi":"10.1016/j.chbah.2025.100132","DOIUrl":"10.1016/j.chbah.2025.100132","url":null,"abstract":"<div><div>AI literacy is one of the key competencies that university students – future professionals and citizens – need for their lives and careers in an AI-dominated world. Cross-national research on AI literacy can generate critical insights into trends and gaps needed to improve AI education. In this study, we focus on Germany, the UK, and the US given their leadership in AI adoption, innovation, and proactive engagement in AI policy and education. We assessed the AI literacy of 1,465 students across these three countries using a knowledge test previously validated in Germany. We additionally measured AI self-efficacy, interest in AI, attitudes towards AI, AI use, and students' prior learning experiences. Our analysis based on item response theory demonstrates that the AI literacy test remains effective in measuring AI literacy across different languages and countries. Our findings indicate that the majority of students have a foundational level of AI literacy, as well as relatively high levels of interest and positive attitudes related to AI. Students in Germany tend to have a higher level of AI literacy compared to their peers in the UK and US, whereas students in the UK tend to have more negative attitudes towards AI, and US students have higher AI self-efficacy. Based on these results, we offer recommendations for educators on how to account for differences in student characteristics, such as attitudes towards AI and prior experiences, to create effective learning opportunities. 
By validating an existing AI literacy test instrument across different countries and languages, we provide an instrument and data which can orient future research and AI literacy assessment.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"4 ","pages":"Article 100132"},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143547813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
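The abstract above reports an item-response-theory analysis of the test's cross-country validity. As a purely illustrative sketch (this is not the authors' analysis pipeline, and all variable names and parameter values below are invented for the demo), a minimal Rasch-style (1PL) difficulty estimation can be written as:

```python
import numpy as np
from scipy.optimize import minimize

# Simulate dichotomous test responses under a Rasch model, then recover
# item difficulties by maximum likelihood. Abilities (theta) are treated
# as known to keep the sketch short; real IRT software estimates both.
rng = np.random.default_rng(0)
n_students, n_items = 500, 10
theta = rng.normal(0.0, 1.0, n_students)      # latent ability per student
b_true = np.linspace(-1.5, 1.5, n_items)      # true item difficulties

# P(correct) under the Rasch model: logistic(theta - b)
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b_true[None, :])))
responses = (rng.random((n_students, n_items)) < p).astype(int)

def neg_log_lik(b):
    # Bernoulli log-likelihood of the full response matrix
    logits = theta[:, None] - b[None, :]
    return -(responses * logits - np.log1p(np.exp(logits))).sum()

b_hat = minimize(neg_log_lik, np.zeros(n_items), method="BFGS").x
```

With 500 simulated respondents the recovered difficulties `b_hat` track `b_true` closely, which is the basic property (stable item parameters across samples) that cross-language validation checks at scale.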
{"title":"Beyond the monotonic: Enhancing human-robot interaction through affective communication","authors":"Kim Klüber , Linda Onnasch","doi":"10.1016/j.chbah.2025.100131","DOIUrl":"10.1016/j.chbah.2025.100131","url":null,"abstract":"<div><div>As robots increasingly become part of human environments, their ability to convey empathy and emotional expression is critical for effective interaction. While non-verbal cues, such as facial expressions and body language, have been widely researched, the role of verbal communication - especially affective speech - has received less attention, despite being essential in many human-robot interaction scenarios. This study addresses this gap through a laboratory experiment with 157 participants, investigating how a robot's affective speech influences human perceptions and behavior. To explore the effects of varying intonation and content, we manipulated the robot's speech across three conditions: monotonic-neutral, monotonic-emotional, and expressive-emotional. Key measures included attributions of experience and agency (following the Theory of Mind), perceived trustworthiness (cognitive and affective level), and forgiveness. Additionally, the Balloon Analogue Risk Task (BART) was employed to assess dependence behavior objectively, and a teaching task with intentional robot errors was used to measure behavioral forgiveness. Our findings reveal that emotionally expressive speech enhances the robot's perceived capacity for experience (i.e., the ability to feel emotions) and increases affective trustworthiness. The results further suggest that affective content of speech, rather than intonation, is the decisive factor. Consequently, in future robotic applications, the affective content of a robot's communication may play a more critical role than the emotional tone. However, we did not find significant differences in dependence behavior or forgiveness across the varying levels of affective communication. 
This suggests that while affective speech can influence emotional perceptions of the robot, it does not necessarily alter behavior.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100131"},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"More is more: Addition bias in large language models","authors":"Luca Santagata , Cristiano De Nobili","doi":"10.1016/j.chbah.2025.100129","DOIUrl":"10.1016/j.chbah.2025.100129","url":null,"abstract":"<div><div>In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over subtractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math<em>Σ</em>tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding letters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B produced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have implications when LLMs are used on a large scale. Additive bias might increase resource use and environmental impact, leading to higher economic costs due to overconsumption and waste. 
This bias should be considered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100129"},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
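The addition-bias rates quoted in the abstract above can be tallied with a small offline harness once model outputs are collected. The sketch below is not the authors' protocol: the function names are invented, and word count is used as a crude additive/subtractive proxy (the paper's tasks used task-specific criteria such as letters added to a palindrome or bricks added to a tower).

```python
def classify_edit(before: str, after: str) -> str:
    """Label a model's revision as additive, subtractive, or neutral
    by comparing word counts of the text before and after the edit."""
    delta = len(after.split()) - len(before.split())
    if delta > 0:
        return "additive"
    if delta < 0:
        return "subtractive"
    return "neutral"

def addition_bias_rate(pairs) -> float:
    """Share of non-neutral edits that are additive; a value near 0.98
    would mirror Llama 3.1's reported behavior on the palindrome task."""
    labels = [classify_edit(b, a) for b, a in pairs]
    changed = [label for label in labels if label != "neutral"]
    if not changed:
        return 0.0
    return sum(label == "additive" for label in changed) / len(changed)
```

For example, `addition_bias_rate([("a b", "a b c"), ("a b c", "a b"), ("x", "x y")])` counts two additive edits out of three non-neutral ones.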
{"title":"From robot to android to humanoid: Does self-referencing influence uncanny valley perceptions of mechanic or anthropomorphic face morphs?","authors":"William D. Weisman, Jorge Peña","doi":"10.1016/j.chbah.2025.100130","DOIUrl":"10.1016/j.chbah.2025.100130","url":null,"abstract":"<div><div>To examine how the self-referencing effect influences uncanny valley perceptions, this study (N = 188) employed an 11-level mechanic-to-human face morph continuum (ranging from 0% to 100% human-likeness in 10% increments) by 2 (self-face vs. stranger-face morphs) within-subjects repeated measures design. Contrary to expectations, self-morphs only enhanced similarity identification and resource allocation. In contrast, anthropomorphic morphs increased human perception, likability, resource allocation, mind perception of experience and agency, and similarity identification, while reducing eerie perceptions relative to mechanical morphs. Individual differences in science fiction and technology affinity influenced responses. Higher affinity participants attributed greater mind perception and showed increased acceptance of synthetic faces. These findings reinforce anthropomorphism as the primary driver of uncanny valley responses, while self-related stimuli exert a limited yet reliable influence on select social perception outcomes. 
The study also highlighted the role of individual differences in shaping responses to artificial faces.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100130"},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using AI chatbots (e.g., CHATGPT) in seeking health-related information online: The case of a common ailment","authors":"Pouyan Esmaeilzadeh , Mahed Maddah , Tala Mirzaei","doi":"10.1016/j.chbah.2025.100127","DOIUrl":"10.1016/j.chbah.2025.100127","url":null,"abstract":"<div><div>In the age of AI, healthcare practices and patient-provider communications can be significantly transformed via AI-based tools and systems that distribute intelligence on the Internet. This study employs a quantitative approach to explore the public value perceptions of using conversational AI (e.g., CHATGPT) to find health-related information online under non-emergency conditions related to a common ailment. Using structural equation modeling on survey data collected from 231 respondents in the US, our study examines the hypotheses linking hedonic and utilitarian values, user satisfaction, willingness to reuse conversational AI, and intentions to take recommended actions. The results show that both hedonic and utilitarian values strongly influence users' satisfaction with conversational AI. The utilitarian values of ease of use, accuracy, relevance, completeness, timeliness, clarity, variety, timesaving, cost-effectiveness, and privacy concern, and the hedonic values of emotional impact and user engagement are significant predictors of satisfaction with conversational AI. Moreover, satisfaction directly influences users' continued intention to use and their willingness to adopt generated results and medical advice. Also, the mediating effect of satisfaction is crucial, as it helps explain the underlying mechanisms of the relationship between value perceptions and desired use behavior. The study emphasizes considering not only the instrumental benefits but also the enjoyment derived from interacting with conversational AI for healthcare purposes. 
We believe that this study offers valuable theoretical and practical implications for stakeholders interested in advancing the application of AI chatbots for health information provision. Our study provides insights into AI research by explaining the multidimensional nature of public value grounded in functional and emotional gratification. The practical contributions of this study can be useful for developers and designers of conversational AI, as they can focus on improving the design features of AI chatbots to meet users’ expectations, preferences, and satisfaction and promote their adoption and continued use.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100127"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI anxiety: Explication and exploration of effect on state anxiety when interacting with AI doctors","authors":"Hyun Yang , S. Shyam Sundar","doi":"10.1016/j.chbah.2025.100128","DOIUrl":"10.1016/j.chbah.2025.100128","url":null,"abstract":"<div><div>People often have anxiety toward artificial intelligence (AI) due to lack of transparency about its operation. This study explicates this anxiety by conceptualizing it as a trait, and examines its effect. It hypothesizes that users with higher AI (trait) anxiety would have higher state anxiety when interacting with an AI doctor, compared to those with lower AI (trait) anxiety, in part because it is a deviation from the status quo of being treated by a human doctor. As a solution, it hypothesizes that an AI doctor's explanations for its diagnosis would relieve patients' state anxiety. Furthermore, based on the status quo bias theory and an adaptation of the theory of interactive media effects (TIME) for the study of human-AI interaction (HAII), this study hypothesizes that the affect heuristic triggered by state anxiety would mediate the causal relationship between the source cue of a doctor and user experience (UX) as well as behavioral intentions. A pre-registered 2 (human vs. AI) x 2 (explainable vs. non-explainable) experiment (<em>N</em> = 346) was conducted to test the hypotheses. Data revealed that AI (trait) anxiety is significantly associated with state anxiety. Additionally, data showed that an AI doctor's explanations for its diagnosis significantly reduce state anxiety in patients with high AI (trait) anxiety but increase state anxiety in those with low AI (trait) anxiety, but these effects of explanations are not significant among patients who interact with a human doctor. 
Theoretical and design implications of these findings and limitations of this study are discussed.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100128"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143376434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reevaluating personalization in AI-powered service chatbots: A study on identity matching via few-shot learning","authors":"Jan Blömker, Carmen-Maria Albrecht","doi":"10.1016/j.chbah.2025.100126","DOIUrl":"10.1016/j.chbah.2025.100126","url":null,"abstract":"<div><div>This study explores the potential of AI-based few-shot learning in creating distinct service chatbot identities (i.e., based on gender and personality). Further, it examines the impact of customer-chatbot identity congruity on perceived enjoyment, usefulness, ease of use, and future chatbot usage intention. A scenario-based online experiment with a 4 (Chatbot identity: extraverted vs. introverted vs. male vs. female) × 2 (Congruity: matching vs. mismatching) between-subjects design with <em>N</em> = 475 participants was conducted. The results confirmed that customers could distinguish between different chatbot identities created via few-shot learning. Contrary to the initial hypothesis, gender-based personalization led to a stronger future chatbot usage intention than personalization based on personality traits. This finding challenges the assumption that an increased depth of personalization is inherently more effective. Customer-chatbot identity congruity did not significantly impact future chatbot usage intention, questioning existing beliefs about the benefits of identity matching. Perceived enjoyment and perceived usefulness mediated the relationship between chatbot identity and future chatbot usage intention, while perceived ease of use did not. High levels of perceived enjoyment and usefulness were strong predictors for the future chatbot usage intention. Thus, while few-shot learning effectively creates distinct chatbot identities, an increased depth of personalization and identity matching do not significantly influence future chatbot usage intentions. 
Practitioners should prioritize enhancing perceived enjoyment and usefulness in chatbot interactions to encourage future chatbot use.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100126"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
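The study above induces distinct chatbot identities via few-shot learning, i.e., seeding the model with a handful of in-context exemplars. The sketch below is illustrative only: the paper's actual prompts are not reproduced here, the exemplar texts are invented, and the message schema follows the common OpenAI-style chat format as an assumption.

```python
# Hypothetical few-shot exemplars per chatbot identity (invented text).
PERSONA_SHOTS = {
    "extraverted": [
        ("Where is my order?",
         "Great question!! Let's track it down together right away!"),
        ("Can I return this?",
         "Absolutely, happy to help! Returns are super easy."),
    ],
    "introverted": [
        ("Where is my order?",
         "It is in transit. Expected delivery: Tuesday."),
        ("Can I return this?",
         "Yes. Returns are accepted within 30 days."),
    ],
}

def build_messages(identity: str, user_query: str) -> list:
    """Assemble an OpenAI-style chat prompt that seeds the requested
    service-chatbot identity with a few in-context examples."""
    messages = [{"role": "system",
                 "content": f"You are a {identity} customer-service chatbot."}]
    for question, answer in PERSONA_SHOTS[identity]:
        messages.append({"role": "user", "content": question})
        messages.append({"role": "assistant", "content": answer})
    messages.append({"role": "user", "content": user_query})
    return messages
```

A gender-based identity would be seeded the same way, swapping the exemplar set; the study's finding is that which exemplar set is used matters less for usage intention than the enjoyment and usefulness the resulting interaction delivers.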
{"title":"Learning through AI-clones: Enhancing self-perception and presentation performance","authors":"Qingxiao Zheng , Zhuoer Chen , Yun Huang","doi":"10.1016/j.chbah.2025.100117","DOIUrl":"10.1016/j.chbah.2025.100117","url":null,"abstract":"<div><div>This study examines the impact of AI-generated digital clones with self-images (AI-clones) on enhancing perceptions and skills in online presentations. A mixed-design experiment with 44 international students compared self-recording videos (self-recording group) to AI-clone videos (AI-clone group) for online English presentation practice. AI-clone videos were generated using voice cloning, face swapping, lip-syncing, and body-language simulation, refining the repetition, filler words, and pronunciation of participants' original presentations. The results, viewed through the lens of social comparison theory, showed that AI clones functioned as positive “role models” for encouraging positive social comparisons. Regarding self-perceptions, speech qualities, and self-kindness, the self-recording group showed an increase in pronunciation satisfaction. However, the AI-clone group exhibited greater self-kindness, a wider scope of self-observation, and a meaningful transition from a corrective to an enhancive approach in self-critique. Moreover, machine-rated scores revealed immediate performance gains only within the AI-clone group. Considering individual differences, aligning interventions with participants’ regulatory focus significantly enhanced their learning experience. 
These findings highlight the theoretical, practical, and ethical implications of AI clones in supporting emotional and cognitive skill development.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100117"},"PeriodicalIF":0.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robots as social companions for space exploration","authors":"Matthieu J. Guitton","doi":"10.1016/j.chbah.2025.100124","DOIUrl":"10.1016/j.chbah.2025.100124","url":null,"abstract":"<div><div>Space is the next frontier that humanity needs to cross to reach new developments. Yet, space exploration faces numerous challenges, especially hazards that endanger human health. While considerable efforts are being made to mitigate the impact of space travel on physical health, the mental health of space travelers is also highly at risk, notably due to isolation and the associated lack of meaningful social interactions. Given the social potential of artificial agents, we propose here that social robots could serve as social partners to mitigate the impact of space travel on mental health. We explore the logic behind using robots as partners for in-space social training, and then identify the advantages of using social robots for this purpose, whether for crew members and passengers on shorter spaceflights or for potential colonists on possible future longer-term space exploration missions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100124"},"PeriodicalIF":0.0,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contradictory attitudes toward academic AI tools: The effect of awe-proneness and corresponding self-regulation","authors":"Jiajin Tong , Yangmingxi Zhang , Yutong Li","doi":"10.1016/j.chbah.2025.100123","DOIUrl":"10.1016/j.chbah.2025.100123","url":null,"abstract":"<div><h3>Objective</h3><div>Artificial intelligence (AI) tools are becoming increasingly popular. To better understand the connections between technology and human beings, this research examines the contradictory impacts of awe-proneness on people's attitudes toward academic AI tools and the underlying self-regulation processes. This goes beyond the small-self and self-transcendent hypotheses by clarifying and elaborating on the complex self-change that results from successful and unsuccessful accommodations induced by awe-proneness.</div></div><div><h3>Method</h3><div>We conducted two studies with Chinese university students and a third study using GPT-3.5 simulations to test on a larger scale and explore age and country differences.</div></div><div><h3>Results</h3><div>Awe-proneness increased both satisfaction and worries about academic AI tools (Study 1, <em>N</em> = 252). Awe-proneness led to satisfaction via promotion and to worries via prevention (Study 2, <em>N</em> = 212). GPT simulation data replicated the above findings and further validated the model across age and country groups (Study 3, simulated <em>N</em> = 1846).</div></div><div><h3>Conclusions</h3><div>This research provides a new perspective to understand the complex nature of awe-proneness and its relation to contradictory AI attitudes. The findings offer novel insights into the rapid application of AI from the perspective of personality psychology. 
These findings may also cultivate and promote further awe research both in psychology and in other disciplines.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}