Teaming up with robots: Analysing potential and challenges with healthcare workers and defining teamwork
Anna M.H. Abrams, Lena Plum, Astrid M. Rosenthal-von der Pütten
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100136 (published 2025-03-03). DOI: 10.1016/j.chbah.2025.100136

Abstract: In interviews with healthcare workers, we explore the potential and challenges of future deployment of robotic assistance systems in healthcare. We focus on individual expectations, wishes and fears, and we especially emphasize the potential role of robotic systems in team dynamics. Will robots be coworkers in the future, or are they expected to be tools? Irrespective of that, are they anticipated to change coworking within a team? Following a grounded theoretical approach, we aim to generate new theories and research questions on robotic assistance systems from the perspective of healthcare workers. We find that healthcare workers are generally optimistic about the implementation of technology in their workplace and not all have pressing concerns. Paradoxically, we further find that the reasoning for why a robot could be part of a team is similar among participants who are opposed to robots in teams and those who are in favour. While participants only focused on work- and task-related criteria when arguing why a robot could be a future colleague, they presented work- and task-unrelated criteria for why a robot could not be one. We discuss the expected impact of robotic assistance systems on work teams in healthcare and pose resulting questions to inspire future hypothetico-inferential research on human-robot teams. We conclude by arguing why current team definitions do not fit human-robot teams and propose a new theoretical model for teams that include humans and robots.

Artificial intelligence and human decision making: Exploring similarities in cognitive bias
Hanna Campbell, Samantha Goldman, Patrick M. Markey
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100138 (published 2025-03-03). DOI: 10.1016/j.chbah.2025.100138

Abstract: This research explores the extent to which Artificial Personas (APs) generated by Large Language Models (LLMs), like ChatGPT, can exhibit cognitive biases similar to those observed in humans. Four studies focusing on well-documented psychological biases were conducted: the Halo Effect, In-Group Out-Group Bias, the False Consensus Effect, and the Anchoring Effect. Each study was designed to test whether APs respond to specific scenarios consistent with typical human responses documented in psychological literature. The findings reveal that APs can replicate these biases, suggesting that APs can model some aspects of human cognitive processing. However, the effect sizes observed were unusually large, suggesting that APs replicate and exaggerate these biases, behaving more like caricatures of human cognitive behavior. This exaggeration highlights the potential of APs to magnify underlying cognitive processes but also necessitates caution in applying these findings directly to human behavior.

What makes children perceive or not perceive minds in generative AI?
Ying Xu, Trisha Thomas, Chi-Lin Yu, Echo Zexuan Pan
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100135 (published 2025-03-03). DOI: 10.1016/j.chbah.2025.100135

Abstract: Children are increasingly engaging in dialogue and interactions with generative AI agents that can mimic human behaviors, raising questions about how children perceive and communicate with AI compared to humans. In an experimental study with 119 children aged 4-8, participants co-created stories in three conditions: with a generative AI agent via a speaker, with a physically present human partner, or with a human partner who was hidden and audible only through a speaker. Results showed a clear distinction in children's communication and perception of visible human partners compared to AI. Nuanced differences also emerged in children's perceptions of hidden human partners versus AI. When physical appearance was absent, children relied on linguistic and paralinguistic cues to assess human-likeness and form perceptions, but physical appearance became a more dominant factor when available. These results shed light on implications for the design of child-facing AI technologies, offering insights into how speech and physical features can be optimized to meet children's developmental and communicative needs.

{"title":"Erratum to “Human divergent exploration capacity for material design: A comparison with artificial intelligence” [Comput. Hum. Behav.: Artificial Humans 2/1 (2024) 100064]","authors":"Hiroyuki Sakai, Kenroh Matsuda, Nobuaki Kikkawa, Seiji Kajita","doi":"10.1016/j.chbah.2025.100119","DOIUrl":"10.1016/j.chbah.2025.100119","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100119"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143527498","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Erratum to “Choosing between human and algorithmic advisors: The role of responsibility sharing”[Comput. Hum. Behav.: Artificial Humans 1/2 (2023) 100009]","authors":"Lior Gazit , Ofer Arazy , Uri Hertz","doi":"10.1016/j.chbah.2025.100118","DOIUrl":"10.1016/j.chbah.2025.100118","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100118"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143527497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Science in a troubled era: Transforming challenges into opportunities for artificial intelligence, social robots, and artificial humans research","authors":"Matthieu J. Guitton","doi":"10.1016/j.chbah.2025.100125","DOIUrl":"10.1016/j.chbah.2025.100125","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100125"},"PeriodicalIF":0.0,"publicationDate":"2025-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143527502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multinational assessment of AI literacy among university students in Germany, the UK, and the US
Marie Hornberger, Arne Bewersdorff, Daniel S. Schiff, Claudia Nerdel
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100132 (published 2025-02-25). DOI: 10.1016/j.chbah.2025.100132

Abstract: AI literacy is one of the key competencies that university students — future professionals and citizens — need for their lives and careers in an AI-dominated world. Cross-national research on AI literacy can generate critical insights into trends and gaps needed to improve AI education. In this study, we focus on Germany, the UK, and the US given their leadership in AI adoption, innovation, and proactive engagement in AI policy and education. We assessed the AI literacy of 1,465 students across these three countries using a knowledge test previously validated in Germany. We additionally measured AI self-efficacy, interest in AI, attitudes towards AI, AI use, and students' prior learning experiences. Our analysis based on item response theory demonstrates that the AI literacy test remains effective in measuring AI literacy across different languages and countries. Our findings indicate that the majority of students have a foundational level of AI literacy, as well as relatively high levels of interest and positive attitudes related to AI. Students in Germany tend to have a higher level of AI literacy compared to their peers in the UK and US, whereas students in the UK tend to have more negative attitudes towards AI, and US students have higher AI self-efficacy. Based on these results, we offer recommendations for educators on how to account for differences in student characteristics, such as attitudes towards AI and prior experiences, to create effective learning opportunities. By validating an existing AI literacy test instrument across different countries and languages, we provide an instrument and data which can orient future research and AI literacy assessment.

{"title":"Beyond the monotonic: Enhancing human-robot interaction through affective communication","authors":"Kim Klüber , Linda Onnasch","doi":"10.1016/j.chbah.2025.100131","DOIUrl":"10.1016/j.chbah.2025.100131","url":null,"abstract":"<div><div>As robots increasingly become part of human environments, their ability to convey empathy and emotional expression is critical for effective interaction. While non-verbal cues, such as facial expressions and body language, have been widely researched, the role of verbal communication - especially affective speech - has received less attention, despite being essential in many human-robot interaction scenarios. This study addresses this gap through a laboratory experiment with 157 participants, investigating how a robot's affective speech influences human perceptions and behavior. To explore the effects of varying intonation and content, we manipulated the robot's speech across three conditions: monotonic-neutral, monotonic-emotional, and expressive-emotional. Key measures included attributions of experience and agency (following the Theory of Mind), perceived trustworthiness (cognitive and affective level), and forgiveness. Additionally, the Balloon Analogue Risk Task (BART) was employed to assess dependence behavior objectively, and a teaching task with intentional robot errors was used to measure behavioral forgiveness. Our findings reveal that emotionally expressive speech enhances the robot's perceived capacity for experience (i.e., the ability to feel emotions) and increases affective trustworthiness. The results further suggest that affective content of speech, rather than intonation, is the decisive factor. Consequently, in future robotic applications, the affective content of a robot's communication may play a more critical role than the emotional tone. However, we did not find significant differences in dependence behavior or forgiveness across the varying levels of affective communication. This suggests that while affective speech can influence emotional perceptions of the robot, it does not necessarily alter behavior.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100131"},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454806","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"More is more: Addition bias in large language models","authors":"Luca Santagata , Cristiano De Nobili","doi":"10.1016/j.chbah.2025.100129","DOIUrl":"10.1016/j.chbah.2025.100129","url":null,"abstract":"<div><div>In this paper, we investigate the presence of addition bias in Large Language Models (LLMs), drawing a parallel to the cognitive bias observed in humans where individuals tend to favor additive over sub-tractive changes [3]. Using a series of controlled experiments, we tested various LLMs, including GPT-3.5 Turbo, Claude 3.5 Sonnet, Mistral, Math<em>Σ</em>tral, and Llama 3.1, on tasks designed to measure their propensity for additive versus subtractive modifications. Our findings demonstrate a significant preference for additive changes across all tested models. For example, in a palindrome creation task, Llama 3.1 favored adding let-ters 97.85% of the time over removing them. Similarly, in a Lego tower balancing task, GPT-3.5 Turbo chose to add a brick 76.38% of the time rather than remove one. In a text summarization task, Mistral 7B pro-duced longer summaries in 59.40%–75.10% of cases when asked to improve its own or others’ writing. These results indicate that, similar to humans, LLMs exhibit a marked addition bias, which might have im-plications when LLMs are used on a large scale. Addittive bias might increase resource use and environmental impact, leading to higher eco-nomic costs due to overconsumption and waste. This bias should be con-sidered in the development and application of LLMs to ensure balanced and efficient problem-solving approaches.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100129"},"PeriodicalIF":0.0,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143454807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"From robot to android to humanoid: Does self-referencing influence uncanny valley perceptions of mechanic or anthropomorphic face morphs?","authors":"William D. Weisman, Jorge Peña","doi":"10.1016/j.chbah.2025.100130","DOIUrl":"10.1016/j.chbah.2025.100130","url":null,"abstract":"<div><div>To examine how the self-referencing effect influences uncanny valley perceptions, this study (N = 188) employed an 11-level mechanic-to-human face morph continuum (ranging from 0% to 100% human-likeness in 10% increments) by 2 (self-face vs. stranger-face morphs) within-subjects repeated measures design. Contrary to expectations, self-morphs only enhanced similarity identification and resource allocation. In contrast, anthropomorphic morphs increased human perception, likability, resource allocation, mind perception of experience and agency, and similarity identification, while reducing eerie perceptions relative to mechanical morphs. Individual differences in science fiction and technology affinity influenced responses. Higher affinity participants attributed greater mind perception and showed increased acceptance of synthetic faces. These findings reinforce anthropomorphism as the primary driver of uncanny valley responses, while self-related stimuli exert a limited yet reliable influence on select social perception outcomes. The study also highlighted the role of individual differences in shaping responses to artificial faces.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100130"},"PeriodicalIF":0.0,"publicationDate":"2025-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143471655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}