{"title":"Measuring the general public artificial intelligence attitudes and literacy: Measurement scales validation by national multistage omnibus survey in Bulgaria","authors":"Ekaterina Markova, Gabriela Yordanova","doi":"10.1016/j.chbah.2025.100193","DOIUrl":"10.1016/j.chbah.2025.100193","url":null,"abstract":"<div><div>This study examines public attitudes toward artificial intelligence (AI) and self-perceived AI literacy in Bulgaria, using two validated instruments: the General Attitudes toward Artificial Intelligence Scale (GAAIS) and the Meta AI Literacy Scale (MAILS). Administered within a national multistage omnibus survey (N = 1006), the study represents the first large-scale assessment of AI-related perceptions in an Eastern European context. The research has a dual focus: to test the psychometric performance of both scales in a new linguistic and methodological setting, and to explore how AI attitudes and literacy are distributed across key sociodemographic groups. Both GAAIS and MAILS demonstrate strong internal consistency in the face-to-face survey setting, supporting their applicability beyond online convenience samples. ANOVA and regression analyses reveal that education and age are significant predictors of AI literacy, while positive AI attitudes also vary by gender. In contrast, negative attitudes appear more evenly distributed, reflecting broader societal concerns. The results confirm the conceptual distinction between attitudes and perceived literacy, and demonstrate the benefit of administering the two instruments in parallel. The study offers a multidimensional approach to measuring public AI readiness and contributes a validated framework for future cross-cultural and policy-oriented research.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100193"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144826862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Limits of ChatGPT's conversational pragmatics in a Turing test on ethics, commonsense, and cultural sensitivity
Authors: Wolfgang Wagner, George Gaskell, Eva Paraschou, Siqi Lyu, Maria Michali, Athena Vakali
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100191
DOI: 10.1016/j.chbah.2025.100191
Published: 2025-08-01

Abstract: Does ChatGPT deliver on its explicit claim to be culturally sensitive and its implicit claim to be a friendly digital person when conversing with human users? These claims are investigated from the perspective of linguistic pragmatics, particularly Grice's cooperative principle in communication. Following the pattern of real-life communication, turn-taking conversations reveal limitations in the LLM's grasp of the entire contextual setting described in the prompt. The prompts included ethical issues, a hiking adventure, geographical orientation, and body movement. For cultural sensitivity, the prompts came from a Pakistani Muslim in English, from a Hindu in English, and from a Chinese speaker in Chinese. The issues were deeply cultural, involving feelings and emotions. Qualitative analysis of the conversational pragmatics showed that ChatGPT is often unable to conduct conversations according to the pragmatic principles of quantity, reliable quality, remaining in focus, and being clear in expression. We conclude that ChatGPT should be presented as a machine rather than a faux human, and should not be offered as a single global LLM but subdivided into culture-specific modules.

Title: Instrumental and experiential attitudes toward (A.I.) augmented decision-making at work
Authors: Kees Maton, Pascale Le Blanc, Philippe van de Calseyde, Anna-Sophie Ulfert
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100188
DOI: 10.1016/j.chbah.2025.100188
Published: 2025-08-01

Abstract: In augmented decision-making, defined as a process wherein human judgment is complemented with decision-support systems powered by artificial intelligence (A.I.-DSS), employees are expected to monitor and sometimes override system outputs to enhance decision-making performance. Despite the growing use of these costly technologies in organizations, they often fail to add value, as employees seem unwilling to delegate some of their tasks to A.I.-DSS or to monitor its outputs. Past research has shown that employees differ in their attitudes toward (collaborating with) emerging technologies, and that these attitudes can facilitate or hinder effective technology use. Drawing on the technology acceptance (TAM) and user experience (UX) literatures, this study qualitatively explored whether employees hold both instrumental (i.e., related to consequences such as performance) and experiential (i.e., related to experiences of the process) attitudes toward augmented decision-making, and whether these two types of attitudes differ in terms of their antecedents and outcomes.

Seventeen semi-structured interviews with A.I.-DSS users from various organizations revealed that experiential attitudes were mentioned more frequently, but were considerably less positive than instrumental attitudes. In terms of antecedents, instrumental attitudes were primarily mentioned in relation to technology (A.I.-DSS) characteristics, whereas experiential attitudes were also related to task and individual characteristics. As for outcomes, instrumental attitudes were solely associated with employees' intentions to use A.I.-DSS, while experiential attitudes were also mentioned in relation to employee absorption, motivation, and stress. These findings highlight the importance of distinguishing between instrumental and experiential attitudes toward augmented decision-making at work.
{"title":"Corrigendum to “From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents” [Comput. Hum. Behav.: Artificial Humans (2024) 100030]","authors":"Theo Araujo , Nadine Bol","doi":"10.1016/j.chbah.2025.100178","DOIUrl":"10.1016/j.chbah.2025.100178","url":null,"abstract":"<div><div>As human-AI interactions become more pervasive, conversational agents are increasingly relevant in our communication environment. While a rich body of research investigates the consequences of one-shot, single interactions with these agents, knowledge is still scarce on how these consequences evolve across regular, repeated interactions in which these agents make use of AI-enabled techniques to enable increasingly personalized conversations and recommendations. By means of a longitudinal experiment (<em>N</em> = 179) with an agent able to personalize a conversation, this study sheds light on how perceptions – about the agent (anthropomorphism and trust), the interaction (dialogue quality and privacy risks), and the information (relevance and credibility) – and behavior (self-disclosure and recommendation adherence) evolve across interactions. The findings highlight the role of interplay between system-initiated personalization and repeated exposure in this process, suggesting the importance of considering the role of AI in communication processes in a dynamic manner.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"5 ","pages":"Article 100178"},"PeriodicalIF":0.0,"publicationDate":"2025-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144921759","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Title: Do truthfulness notifications influence perceptions of AI-generated political images? A cognitive investigation with EEG
Authors: Colin Conrad, Anika Nissen, Kya Masoumi, Mayank Ramchandani, Rafael Fecury Braga, Aaron J. Newman
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100185
DOI: 10.1016/j.chbah.2025.100185
Published: 2025-07-22

Abstract: Political misinformation is a growing problem for democracies, partly due to the rise of widely accessible artificial intelligence-generated content (AIGC). In response, social media platforms are increasingly considering explicit AI content labeling, though the evidence to support the effectiveness of this approach has been mixed. In this paper, we discuss two studies which shed light on antecedent cognitive processes that help explain why and how AIGC labeling impacts user evaluations in the specific context of AI-generated political images. In the first study, we conducted a neurophysiological experiment with 26 participants using EEG event-related potentials (ERPs) and self-report measures to gain deeper insights into the brain processes associated with the evaluations of artificially generated political images and AIGC labels. In the second study, we embedded some of the stimuli from the EEG study into replica YouTube recommendations and administered them to 276 participants online. The results from the two studies suggest that AI-generated political images are associated with heightened attentional and emotional processing. These responses are linked to perceptions of humanness and trustworthiness. Importantly, trustworthiness perceptions can be impacted by effective AIGC labels. We found effects traceable to the brain's late-stage executive network activity, as reflected by patterns of the P300 and late positive potential (LPP) components. Our findings suggest that AIGC labeling can be an effective approach for addressing online misinformation when the design is carefully considered. Future research could extend these results by pairing more photorealistic stimuli with ecologically valid social-media tasks and multimodal observation techniques to refine label design and personalize interventions across demographic segments.

Title: The emotional cost of AI chatbots in education: Who benefits and who struggles?
Authors: Justin W. Carter, Justin T. Scott, John D. Barrett
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100181
DOI: 10.1016/j.chbah.2025.100181
Published: 2025-07-11

Abstract: Recent advancements in large language models have enabled the development of advanced chatbots, offering new opportunities for personalized learning and academic support that could transform the way students learn. Despite their growing popularity and promising benefits, there is limited understanding of their psychological impact. Accordingly, this study examined the effects of chatbot usage on students' positive and negative affect and considered the moderating role of familiarity. Using a pre-post control group design, undergraduate students were divided into two groups to complete an assignment. Both groups received the same task and differed only in whether they were instructed to use an AI chatbot. Students who used a chatbot reported significantly lower positive affect, with no significant difference in negative affect. Importantly, familiarity with chatbots moderated changes in positive affect, such that students with more familiarity with chatbots reported smaller declines. These findings showcase chatbots' two-sided effects: while the tools may prove empowering with effective use, they can also diminish the positive aspects of completing assignments for those with less familiarity. These findings underscore the behavioral complexity of AI integration by highlighting how familiarity moderates affective outcomes and how chatbot use may reduce positive emotional engagement without increasing negative affect. Integrating AI tools in education requires not just access and training, but a nuanced understanding of how student behavior and emotional well-being are shaped by their interaction with intelligent systems.

Title: RVBench: Role values benchmark for role-playing LLMs
Authors: Ye Wang, Tong Li, Meixuan Li, Ziyue Cheng, Ge Wang, Hanyue Kang, Yaling Deng, Hongjiang Xiao, Yuan Zhang
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100184
DOI: 10.1016/j.chbah.2025.100184
Published: 2025-07-09

Abstract: With the explosive development of Large Language Models (LLMs), the demand for role-playing agents has greatly increased to support applications such as personalized digital companions and artificial society simulation. In LLM-driven role-playing, the values of agents lay the foundation for their attitudes and behaviors, so the alignment of values is crucial for enhancing the realism of interactions and enriching the user experience. However, a benchmark for evaluating values in role-playing LLMs has been absent. In this study, we built a Role Values Dataset (RVD) containing 25 roles as the ground truth. Additionally, inspired by psychological tests in humans, we proposed a Role Values Benchmark (RVBench) comprising values-rating and values-ranking methods to evaluate the values of role-playing LLMs from subjective questionnaires and observed behavior. The values-rating method tests value orientation through the revised Portrait Values Questionnaire (PVQ-RR), which provides a direct and quantitative comparison with the roles to be played. The values-ranking method assesses whether the behaviors of agents are consistent with their values' hierarchical organization when encountering dilemmatic scenarios. Subsequent testing on a selection of both open-source and closed-source LLMs revealed that GLM-4 exhibited values most closely mirroring the roles in the RVD. However, compared to the preset roles, there is still a gap in the role-playing ability of LLMs, including consistency, stability, and flexibility across value dimensions. These findings prompt a vital need for further research aimed at refining the role-playing capacities of LLMs from a value-alignment perspective. The RVD is available at: https://github.com/northwang/RVD.

Title: Self-disclosure to AI: People provide personal information to AI and humans equivalently
Authors: Elizabeth R. Merwin, Allen C. Hagen, Joseph R. Keebler, Chad Forbes
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100180
DOI: 10.1016/j.chbah.2025.100180
Published: 2025-07-09

Abstract: As Artificial Intelligence (AI) increasingly emerges as a tool in therapeutic settings, understanding individuals' willingness to disclose personal information to AI versus humans is critical. This study examined how participants chose between self-disclosure-based and fact-based statements when responses were thought to be analyzed by an AI, a human researcher, or kept private. Participants completed forced-choice trials in which they selected a self-disclosure-based or fact-based statement under one of the three agent conditions. Results showed that participants were significantly more likely to select self-disclosure-based over fact-based statements. Self-disclosure choice rates were similar for the AI and the human researcher, but significantly lower when responses were kept private. Multiple regression analyses revealed that individuals with higher scores on the negative-attitudes-toward-AI scale were less likely to choose self-disclosure-based statements across the three agent conditions. Overall, individuals were just as likely to choose to self-disclose to an AI as to a human researcher, and more likely to choose either agent over keeping self-disclosure information private. In addition, personality traits and attitudes toward AI significantly influenced disclosure choices. These findings provide insights into how individual differences affect the willingness to self-disclose information in human-AI interactions and offer a foundation for exploring the feasibility of AI as a clinical and social tool. Future research should expand on these results to further understand self-disclosure behaviors and evaluate AI's role in therapeutic settings.

Title: A critical discussion of strategies and ramifications of implementing conversational agents in mental healthcare
Authors: Arthur Bran Herbener, Michał Klincewicz, Lily Frank, Malene Flensborg Damholdt
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100182
DOI: 10.1016/j.chbah.2025.100182
Published: 2025-07-08

Abstract: In recent years, there has been growing optimism about the potential of conversational agents, such as chatbots and social robots, in mental healthcare. Their scalability offers a promising solution to some of the key limitations of the dominant model of treatment in Western countries. However, while recent experimental research provides grounds for cautious optimism, the integration of conversational agents into mental healthcare raises significant clinical and ethical challenges, particularly concerning the partial or full replacement of human practitioners. Overall, this theoretical paper examines the clinical and ethical implications of deploying conversational agents in mental health services as partial and full replacements of human practitioners. On the one hand, we outline how these agents can circumvent core treatment barriers through stepped care, blended care, and a personalized medicine approach. On the other hand, we argue that the partial and full substitution of human practitioners can have profound consequences for the ethical landscape of mental healthcare, potentially undermining patients' rights and safety. By making this argument, this work extends prior literature by specifically considering how different levels of implementation of conversational agents in healthcare present both opportunities and risks. We argue for the urgent need to establish regulatory frameworks to ensure that the integration of conversational agents into mental healthcare is both safe and ethically sound.

Title: Navigating relationships with GenAI chatbots: User attitudes, acceptability, and potential
Authors: Laura M. Vowels, Rachel R.R. Francois-Walcott, Maëlle Grandjean, Joëlle Darwiche, Matthew J. Vowels
Journal: Computers in Human Behavior: Artificial Humans, Volume 5, Article 100183
DOI: 10.1016/j.chbah.2025.100183
Published: 2025-07-08

Abstract: Despite the growing adoption of GenAI chatbots in health and well-being contexts, little is known about public attitudes toward their use for relationship support or the factors shaping acceptance and effectiveness. This study addresses that research gap across three studies. Study 1 involved five focus groups with 30 young people to gauge general attitudes toward GenAI chatbots in relationship contexts. Study 2 evaluated user experiences during a single relationship intervention session with 20 participants. Study 3 quantitatively measured changes in attitudes toward GenAI chatbots and online interventions among 260 participants, assessed before, immediately after, and two weeks following their interaction with a GenAI chatbot or a writing task. Three main themes emerged in Studies 1 and 2: "Accessible First-Line Treatment", "Artificial Advice for Human Connection", and "Internet Archive". Additionally, Study 1 revealed the themes "Privacy vs. Openness" and "Are We in a Black Mirror Episode?", while Study 2 uncovered the themes "Exceeding Expectations" and "Supporting Neurodivergence". The Study 3 results indicated that GenAI chatbot interactions led to reduced effort expectancy and short-term increases in acceptance and decreases in objections to GenAI chatbots, though these effects were not sustained at the two-week follow-up. Both intervention types improved general attitudes toward online interventions, suggesting that exposure can enhance the uptake of digital health tools. This research underscores the evolving role of GenAI chatbots in augmenting therapeutic practices, highlighting their potential for personalized, accessible, and effective relationship interventions in the digital age.