{"title":"Using AI chatbots (e.g., CHATGPT) in seeking health-related information online: The case of a common ailment","authors":"Pouyan Esmaeilzadeh , Mahed Maddah , Tala Mirzaei","doi":"10.1016/j.chbah.2025.100127","DOIUrl":"10.1016/j.chbah.2025.100127","url":null,"abstract":"<div><div>In the age of AI, healthcare practices and patient-provider communications can be significantly transformed via AI-based tools and systems that distribute Intelligence on the Internet. This study employs a quantitative approach to explore the public value perceptions of using conversational AI (e.g., CHATGPT) to find health-related information online under non-emergency conditions related to a common ailment. Using structural equation modeling on survey data collected from 231 respondents in the US, our study examines the hypotheses linking hedonic and utilitarian values, user satisfaction, willingness to reuse conversational AI, and intentions to take recommended actions. The results show that both hedonic and utilitarian values strongly influence users' satisfaction with conversational AI. The utilitarian values of ease of use, accuracy, relevance, completeness, timeliness, clarity, variety, timesaving, cost-effectiveness, and privacy concern, and the hedonic values of emotional impact and user engagement are significant predictors of satisfaction with conversational AI. Moreover, satisfaction directly influences users' continued intention to use and their willingness to adopt generated results and medical advice. Also, the mediating effect of satisfaction is crucial as it helps to understand the underlying mechanisms of the relationship between value perceptions and desired use behavior. The study emphasizes considering not only the instrumental benefits but also the enjoyment derived from interacting with conversational AI for healthcare purposes. We believe that this study offers valuable theoretical and practical implications for stakeholders interested in advancing the application of AI chatbots for health information provision. Our study provides insights into AI research by explaining the multidimensional nature of public value grounded in functional and emotional gratification. The practical contributions of this study can be useful for developers and designers of conversational AI, as they can focus on improving the design features of AI chatbots to meet users’ expectations, preferences, and satisfaction and promote their adoption and continued use.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100127"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI anxiety: Explication and exploration of effect on state anxiety when interacting with AI doctors","authors":"Hyun Yang , S. Shyam Sundar","doi":"10.1016/j.chbah.2025.100128","DOIUrl":"10.1016/j.chbah.2025.100128","url":null,"abstract":"<div><div>People often have anxiety toward artificial intelligence (AI) due to lack of transparency about its operation. This study explicates this anxiety by conceptualizing it as a trait, and examines its effect. It hypothesizes that users with higher AI (trait) anxiety would have higher state anxiety when interacting with an AI doctor, compared to those with lower AI (trait) anxiety, in part because it is a deviation from the status quo of being treated by a human doctor. As a solution, it hypothesizes that an AI doctor's explanations for its diagnosis would relieve patients' state anxiety. Furthermore, based on the status quo bias theory and an adaptation of the theory of interactive media effects (TIME) for the study of human-AI interaction (HAII), this study hypothesizes that the affect heuristic triggered by state anxiety would mediate the causal relationship between the source cue of a doctor and user experience (UX) as well as behavioral intentions. A pre-registered 2 (human vs. AI) x 2 (explainable vs. non-explainable) experiment (<em>N</em> = 346) was conducted to test the hypotheses. Data revealed that AI (trait) anxiety is significantly associated with state anxiety. Additionally, data showed that an AI doctor's explanations for its diagnosis significantly reduce state anxiety in patients with high AI (trait) anxiety but increase state anxiety in those with low AI (trait) anxiety, but these effects of explanations are not significant among patients who interact with a human doctor. Theoretical and design implications of these findings and limitations of this study are discussed.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100128"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143376434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reevaluating personalization in AI-powered service chatbots: A study on identity matching via few-shot learning","authors":"Jan Blömker, Carmen-Maria Albrecht","doi":"10.1016/j.chbah.2025.100126","DOIUrl":"10.1016/j.chbah.2025.100126","url":null,"abstract":"<div><div>This study explores the potential of AI-based few-shot learning in creating distinct service chatbot identities (i.e., based on gender and personality). Further, it examines the impact of customer-chatbot identity congruity on perceived enjoyment, usefulness, ease of use, and future chatbot usage intention. A scenario-based online experiment with a 4 (Chatbot identity: extraverted vs. introverted vs. male vs. female) × 2 (Congruity: matching vs. mismatching) between-subjects design with <em>N</em> = 475 participants was conducted. The results confirmed that customers could distinguish between different chatbot identities created via few-shot learning. Contrary to the initial hypothesis, gender-based personalization led to a stronger future chatbot usage intention than personalization based on personality traits. This finding challenges the assumption that an increased depth of personalization is inherently more effective. Customer-chatbot identity congruity did not significantly impact future chatbot usage intention, questioning existing beliefs about the benefits of identity matching. Perceived enjoyment and perceived usefulness mediated the relationship between chatbot identity and future chatbot usage intention, while perceived ease of use did not. High levels of perceived enjoyment and usefulness were strong predictors for the future chatbot usage intention. Thus, while few-shot learning effectively creates distinct chatbot identities, an increased depth of personalization and identity matching do not significantly influence future chatbot usage intentions. Practitioners should prioritize enhancing perceived enjoyment and usefulness in chatbot interactions to encourage future chatbot use.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100126"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143388109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning through AI-clones: Enhancing self-perception and presentation performance","authors":"Qingxiao Zheng , Zhuoer Chen , Yun Huang","doi":"10.1016/j.chbah.2025.100117","DOIUrl":"10.1016/j.chbah.2025.100117","url":null,"abstract":"<div><div>This study examines the impact of AI-generated digital clones with self-images (AI-clones) on enhancing perceptions and skills in online presentations. A mixed-design experiment with 44 international students compared self-recording videos (self-recording group) to AI-clone videos (AI-clone group) for online English presentation practice. AI-clone videos were generated using voice cloning, face swapping, lip-syncing, and body-language simulation, refining the repetition, filler words, and pronunciation of participants' original presentations. The results, viewed through the lens of social comparison theory, showed that AI clones functioned as positive “role models” for encouraging positive social comparisons. Regarding self-perceptions, speech qualities, and self-kindness, the self-recording group showed an increase in pronunciation satisfaction. However, the AI-clone group exhibited greater self-kindness, a wider scope of self-observation, and a meaningful transition from a corrective to an enhancive approach in self-critique. Moreover, machine-rated scores revealed immediate performance gains only within the AI-clone group. Considering individual differences, aligning interventions with participants’ regulatory focus significantly enhanced their learning experience. These findings highlight the theoretical, practical, and ethical implications of AI clones in supporting emotional and cognitive skill development.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100117"},"PeriodicalIF":0.0,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143420539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robots as social companions for space exploration","authors":"Matthieu J. Guitton","doi":"10.1016/j.chbah.2025.100124","DOIUrl":"10.1016/j.chbah.2025.100124","url":null,"abstract":"<div><div>Space is the next border that humanity needs to cross to reach new developments. Yet, space exploration faces numerous challenges, especially when it comes to hazard putting in danger human health. While a lot of efforts are being made to mitigate the impact of space travel on physical health, mental health of space travelers is also highly at risk, notably due to isolation and the associated lack of meaningful social interactions. Given the social potentiality of artificial agents, we propose here that social robots could play the role of social partners to mitigate the impact of space travel on mental health. We will explore the logics behind using robots as partners for in-space social training. We will then identify what are the advantages of using social robots for this purpose, either for crew members and passengers on shorter spaceflights, or for potential colons for possible future longer-term space exploration missions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100124"},"PeriodicalIF":0.0,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143377939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contradictory attitudes toward academic AI tools: The effect of awe-proneness and corresponding self-regulation","authors":"Jiajin Tong , Yangmingxi Zhang , Yutong Li","doi":"10.1016/j.chbah.2025.100123","DOIUrl":"10.1016/j.chbah.2025.100123","url":null,"abstract":"<div><h3>Objective</h3><div>Artificial intelligence (AI for short) tools become increasingly popular. To better understand the connections between technology and human beings, this research examines the contradictory impacts of awe-proneness on people's attitudes toward academic AI tools and underlying self-regulation processes, which goes beyond the small-self or self-transcendent hypotheses by further clarifying and elaborating on the complex self-change as a consequence of successful and unsuccessful accommodations induced by awe-proneness.</div></div><div><h3>Method</h3><div>We conducted two studies with Chinese university students and a third study using GPT-3.5 simulations to test on a larger scale and explore age and country differences.</div></div><div><h3>Results</h3><div>Awe-proneness increased both satisfaction and worries about academic AI tools (Study 1, <em>N</em> = 252). Awe-proneness led to satisfaction via promotion and to worries via prevention (Study 2, <em>N</em> = 212). GPT simulation data replicated the above findings and further validated the model across age and country groups (Study 3, simulated <em>N</em> = 1846).</div></div><div><h3>Conclusions</h3><div>This research provides a new perspective to understand the complex nature of awe-proneness and its relation to contradictory AI attitudes. The findings offer novel insights into the rapid application of AI from the perspective of personality psychology. It would further cultivate and promote awe research development both in psychology and in other disciplines.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance rather than reputation affects humans’ trust towards an artificial agent","authors":"Fritz Becker , Celine Ina Spannagl , Jürgen Buder , Markus Huff","doi":"10.1016/j.chbah.2025.100122","DOIUrl":"10.1016/j.chbah.2025.100122","url":null,"abstract":"<div><div>To succeed in teamwork with artificial agents, humans have to calibrate their trust towards agents based on information they receive about an agent before interaction (reputation information) as well as on experiences they have during interaction (agent performance). This study (N = 253) focused on the influence of a virtual agent's reputation (high/low) and actual observed performance (high/low) on a human user's behavioral trust (delegation behavior) and self-reported trust (questionnaires) in a cooperative Tetris game. The main findings suggested that agent reputation influences self-reported trust prior to interaction. However, the effect of reputation immediately got overridden by performance of the agent during the interaction. The agent's performance during the interactive task influenced delegation behavior, as well as self-reported trust measured post-interaction. Pre-to post-change in self-reported trust was significantly larger when reputation and performance were incongruent. We concluded that reputation might have had a smaller than expected influence on behavior in the presence of a novel tool that afforded exploration. Our research contributes to understanding trust and delegation dynamics, which is crucial for the design and adequate use of artificial agent team partners in a world of digital transformation.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100122"},"PeriodicalIF":0.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who wants to be hired by AI? How message frames and AI transparency impact individuals’ attitudes and behaviors toward companies using AI in hiring","authors":"Ying Xiong, Joon Kyoung Kim","doi":"10.1016/j.chbah.2025.100120","DOIUrl":"10.1016/j.chbah.2025.100120","url":null,"abstract":"<div><div>In recent years, many companies have begun to adopt Artificial intelligence (AI) in their recruitment and personnel selection. Despite the increasing use of AI in hiring, little is known about how companies can better communicate about their AI use with job applicants to increase their positive attitudes and behaviors toward companies. Three experimental studies were conducted to investigate the impact of exposure to gain- and loss-framed messages and AI transparency information (third-party audit vs. sharing AI information with job candidates) in job advertisements on individuals' attitudes, organizational trust, and positive word-of-mouth (WOM) intentions. The results showed that the presence of AI transparency information in job advertisements increases individuals’ favorable attitudes, trust, and positive WOM intention toward companies using AI in hiring. Loss-framed messages than gain-framed messages increased the outcome variables in the context of recruitment process time, but not in the context of unconscious hiring bias.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100120"},"PeriodicalIF":0.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines","authors":"Nora McDonald , Aditya Johri , Areej Ali , Aayushi Hingle Collier","doi":"10.1016/j.chbah.2025.100121","DOIUrl":"10.1016/j.chbah.2025.100121","url":null,"abstract":"<div><div>The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs focused on regulating its use, particularly among students, before shifting towards advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct GenAI. This paper presents an analysis of documents produced by 116 US universities classified as as high research activity or R1 institutions providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half the institutions provided sample syllabi (N = 65, 56%) and half (N = 58, 50%) provided sample GenAI curriculum and activities that would help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities focused on writing, whereas references to code and STEM-related activities were infrequent, and often vague, even when mentioned (N = 58, 50%). Finally, more than half of institutions talked about the ethics of GenAI on a broad range of topics, including Diversity, Equity and Inclusion (DEI) (N = 60, 52%). Based on our findings we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100121"},"PeriodicalIF":0.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Numeric vs. verbal information: The influence of information quantifiability in Human–AI vs. Human–Human decision support","authors":"Eileen Roesler , Tobias Rieger , Markus Langer","doi":"10.1016/j.chbah.2024.100116","DOIUrl":"10.1016/j.chbah.2024.100116","url":null,"abstract":"<div><div>A number of factors, including different task characteristics, influence trust in human vs. AI decision support. In particular, the aspect of information quantifiability could influence trust and dependence, especially considering that human and AI support may have varying strengths in assessing criteria that differ in their quantifiability. To investigate the effect of information quantifiability we conducted an online experiment (<span><math><mrow><mi>N</mi><mo>=</mo><mn>204</mn></mrow></math></span>) with a 2 (support agent: AI vs. human) <span><math><mo>×</mo></math></span> 2 (quantifiability: low vs. high) between-subjects design, using a simulated recruitment task. The support agent was manipulated via framing, while quantifiability was manipulated by the evaluation criteria in the recruitment paradigm. The analysis revealed higher trust for human over AI support. Moreover, trust was higher in the low than in the high quantifiability condition. Counterintuitively, participants rated the applicants as less qualified than their support agent’s rating, especially noticeable in the low quantifiability condition. Besides reinforcing earlier findings showing higher trust towards human experts than towards AI and showcasing the importance of information quantifiability, the present study also raises questions concerning the perceived leniency of support agents and its impact on trust and behavior.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100116"},"PeriodicalIF":0.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}