{"title":"Using AI chatbots (e.g., CHATGPT) in seeking health-related information online: The case of a common ailment","authors":"Pouyan Esmaeilzadeh , Mahed Maddah , Tala Mirzaei","doi":"10.1016/j.chbah.2025.100127","DOIUrl":"10.1016/j.chbah.2025.100127","url":null,"abstract":"<div><div>In the age of AI, healthcare practices and patient-provider communications can be significantly transformed via AI-based tools and systems that distribute Intelligence on the Internet. This study employs a quantitative approach to explore the public value perceptions of using conversational AI (e.g., CHATGPT) to find health-related information online under non-emergency conditions related to a common ailment. Using structural equation modeling on survey data collected from 231 respondents in the US, our study examines the hypotheses linking hedonic and utilitarian values, user satisfaction, willingness to reuse conversational AI, and intentions to take recommended actions. The results show that both hedonic and utilitarian values strongly influence users' satisfaction with conversational AI. The utilitarian values of ease of use, accuracy, relevance, completeness, timeliness, clarity, variety, timesaving, cost-effectiveness, and privacy concern, and the hedonic values of emotional impact and user engagement are significant predictors of satisfaction with conversational AI. Moreover, satisfaction directly influences users' continued intention to use and their willingness to adopt generated results and medical advice. Also, the mediating effect of satisfaction is crucial as it helps to understand the underlying mechanisms of the relationship between value perceptions and desired use behavior. The study emphasizes considering not only the instrumental benefits but also the enjoyment derived from interacting with conversational AI for healthcare purposes. We believe that this study offers valuable theoretical and practical implications for stakeholders interested in advancing the application of AI chatbots for health information provision. Our study provides insights into AI research by explaining the multidimensional nature of public value grounded in functional and emotional gratification. The practical contributions of this study can be useful for developers and designers of conversational AI, as they can focus on improving the design features of AI chatbots to meet users’ expectations, preferences, and satisfaction and promote their adoption and continued use.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100127"},"PeriodicalIF":0.0,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143350637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contradictory attitudes toward academic AI tools: The effect of awe-proneness and corresponding self-regulation","authors":"Jiajin Tong , Yangmingxi Zhang , Yutong Li","doi":"10.1016/j.chbah.2025.100123","DOIUrl":"10.1016/j.chbah.2025.100123","url":null,"abstract":"<div><h3>Objective</h3><div>Artificial intelligence (AI for short) tools become increasingly popular. To better understand the connections between technology and human beings, this research examines the contradictory impacts of awe-proneness on people's attitudes toward academic AI tools and underlying self-regulation processes, which goes beyond the small-self or self-transcendent hypotheses by further clarifying and elaborating on the complex self-change as a consequence of successful and unsuccessful accommodations induced by awe-proneness.</div></div><div><h3>Method</h3><div>We conducted two studies with Chinese university students and a third study using GPT-3.5 simulations to test on a larger scale and explore age and country differences.</div></div><div><h3>Results</h3><div>Awe-proneness increased both satisfaction and worries about academic AI tools (Study 1, <em>N</em> = 252). Awe-proneness led to satisfaction via promotion and to worries via prevention (Study 2, <em>N</em> = 212). GPT simulation data replicated the above findings and further validated the model across age and country groups (Study 3, simulated <em>N</em> = 1846).</div></div><div><h3>Conclusions</h3><div>This research provides a new perspective to understand the complex nature of awe-proneness and its relation to contradictory AI attitudes. The findings offer novel insights into the rapid application of AI from the perspective of personality psychology. It would further cultivate and promote awe research development both in psychology and in other disciplines.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100123"},"PeriodicalIF":0.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance rather than reputation affects humans’ trust towards an artificial agent","authors":"Fritz Becker , Celine Ina Spannagl , Jürgen Buder , Markus Huff","doi":"10.1016/j.chbah.2025.100122","DOIUrl":"10.1016/j.chbah.2025.100122","url":null,"abstract":"<div><div>To succeed in teamwork with artificial agents, humans have to calibrate their trust towards agents based on information they receive about an agent before interaction (reputation information) as well as on experiences they have during interaction (agent performance). This study (N = 253) focused on the influence of a virtual agent's reputation (high/low) and actual observed performance (high/low) on a human user's behavioral trust (delegation behavior) and self-reported trust (questionnaires) in a cooperative Tetris game. The main findings suggested that agent reputation influences self-reported trust prior to interaction. However, the effect of reputation immediately got overridden by performance of the agent during the interaction. The agent's performance during the interactive task influenced delegation behavior, as well as self-reported trust measured post-interaction. Pre-to post-change in self-reported trust was significantly larger when reputation and performance were incongruent. We concluded that reputation might have had a smaller than expected influence on behavior in the presence of a novel tool that afforded exploration. Our research contributes to understanding trust and delegation dynamics, which is crucial for the design and adequate use of artificial agent team partners in a world of digital transformation.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100122"},"PeriodicalIF":0.0,"publicationDate":"2025-01-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Who wants to be hired by AI? How message frames and AI transparency impact individuals’ attitudes and behaviors toward companies using AI in hiring","authors":"Ying Xiong, Joon Kyoung Kim","doi":"10.1016/j.chbah.2025.100120","DOIUrl":"10.1016/j.chbah.2025.100120","url":null,"abstract":"<div><div>In recent years, many companies have begun to adopt Artificial intelligence (AI) in their recruitment and personnel selection. Despite the increasing use of AI in hiring, little is known about how companies can better communicate about their AI use with job applicants to increase their positive attitudes and behaviors toward companies. Three experimental studies were conducted to investigate the impact of exposure to gain- and loss-framed messages and AI transparency information (third-party audit vs. sharing AI information with job candidates) in job advertisements on individuals' attitudes, organizational trust, and positive word-of-mouth (WOM) intentions. The results showed that the presence of AI transparency information in job advertisements increases individuals’ favorable attitudes, trust, and positive WOM intention toward companies using AI in hiring. Loss-framed messages than gain-framed messages increased the outcome variables in the context of recruitment process time, but not in the context of unconscious hiring bias.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100120"},"PeriodicalIF":0.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155233","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Generative artificial intelligence in higher education: Evidence from an analysis of institutional policies and guidelines","authors":"Nora McDonald , Aditya Johri , Areej Ali , Aayushi Hingle Collier","doi":"10.1016/j.chbah.2025.100121","DOIUrl":"10.1016/j.chbah.2025.100121","url":null,"abstract":"<div><div>The release of ChatGPT in November 2022 prompted a massive uptake of generative artificial intelligence (GenAI) across higher education institutions (HEIs). In response, HEIs focused on regulating its use, particularly among students, before shifting towards advocating for its productive integration within teaching and learning. Since then, many HEIs have increasingly provided policies and guidelines to direct GenAI. This paper presents an analysis of documents produced by 116 US universities classified as as high research activity or R1 institutions providing a comprehensive examination of the advice and guidance offered by institutional stakeholders about GenAI. Through an extensive analysis, we found a majority of universities (N = 73, 63%) encourage the use of GenAI, with many offering detailed guidance for its use in the classroom (N = 48, 41%). Over half the institutions provided sample syllabi (N = 65, 56%) and half (N = 58, 50%) provided sample GenAI curriculum and activities that would help instructors integrate and leverage GenAI in their teaching. Notably, the majority of guidance focused on writing activities focused on writing, whereas references to code and STEM-related activities were infrequent, and often vague, even when mentioned (N = 58, 50%). Finally, more than half of institutions talked about the ethics of GenAI on a broad range of topics, including Diversity, Equity and Inclusion (DEI) (N = 60, 52%). Based on our findings we caution that guidance for faculty can become burdensome as policies suggest or imply substantial revisions to existing pedagogical practices.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100121"},"PeriodicalIF":0.0,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Numeric vs. verbal information: The influence of information quantifiability in Human–AI vs. Human–Human decision support","authors":"Eileen Roesler , Tobias Rieger , Markus Langer","doi":"10.1016/j.chbah.2024.100116","DOIUrl":"10.1016/j.chbah.2024.100116","url":null,"abstract":"<div><div>A number of factors, including different task characteristics, influence trust in human vs. AI decision support. In particular, the aspect of information quantifiability could influence trust and dependence, especially considering that human and AI support may have varying strengths in assessing criteria that differ in their quantifiability. To investigate the effect of information quantifiability we conducted an online experiment (<span><math><mrow><mi>N</mi><mo>=</mo><mn>204</mn></mrow></math></span>) with a 2 (support agent: AI vs. human) <span><math><mo>×</mo></math></span> 2 (quantifiability: low vs. high) between-subjects design, using a simulated recruitment task. The support agent was manipulated via framing, while quantifiability was manipulated by the evaluation criteria in the recruitment paradigm. The analysis revealed higher trust for human over AI support. Moreover, trust was higher in the low than in the high quantifiability condition. Counterintuitively, participants rated the applicants as less qualified than their support agent’s rating, especially noticeable in the low quantifiability condition. Besides reinforcing earlier findings showing higher trust towards human experts than towards AI and showcasing the importance of information quantifiability, the present study also raises questions concerning the perceived leniency of support agents and its impact on trust and behavior.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100116"},"PeriodicalIF":0.0,"publicationDate":"2024-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Physical anthropomorphism (but not gender presentation) influences trust in household robots","authors":"Colin Holbrook , Umesh Krishnamurthy , Paul P. Maglio , Alan R. Wagner","doi":"10.1016/j.chbah.2024.100114","DOIUrl":"10.1016/j.chbah.2024.100114","url":null,"abstract":"<div><div>This research explores anthropomorphism and gender presentation as prospective determinants of trust in household service robots with respect to care of objects (e.g., clothing, valuables), information (e.g., online passwords, credit card numbers), and living agents (e.g., pets, children). In Experiments 1 and 2, we compared trust in a humanoid robot presenting as male, female, or gender-neutral, finding no effects of gender presentation on any trust outcome. In Experiment 3, a fourth condition depicting a physically nonhumanoid robot was added. Relative to the humanoid conditions, participants reported less willingness to trust the nonhumanoid robot to care for their objects, personal information, or vulnerable agents; the reduced trust in care for objects or information was mediated by appraisals of the nonhumanoid as less intelligent and less likable, whereas the reduced trust in care of agents was mediated by appraisals of the nonhumanoid as less likable and less alive. In a parallel pattern, across all studies, participants’ appraisals of robots as intelligent tracked trust in them to take care of objects or information (but not agents), whereas appraisals of robots as likable and alive tracked trust in care of agents. The results are discussed as they inform past work examining effects of gender presentation and anthropomorphism on perceptions of, and trust in, robots.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100114"},"PeriodicalIF":0.0,"publicationDate":"2024-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trust and acceptance of AI caregiving robots: The role of ethics and self-efficacy","authors":"Cathy S. Lin, Ying-Feng Kuo, Ting-Yu Wang","doi":"10.1016/j.chbah.2024.100115","DOIUrl":"10.1016/j.chbah.2024.100115","url":null,"abstract":"<div><div>As AI technology rapidly advances, ethical concerns have emerged as a global focus. This study introduces a second-order scale for analyzing AI ethics and proposes a model to examine the intention to use AI caregiving robots. The model incorporates elements from the Unified Theory of Acceptance and Use of Technology (UTAUT)—including social influence and performance expectancy—alongside AI ethics, self-efficacy, and trust in AI. The findings reveal that AI ethics and social influence enhance self-efficacy, which in turn increases trust in AI, performance expectancy, and the intention to use AI caregiving robots. Moreover, trust in AI and performance expectancy directly and positively influence the intention to adopt these robots. By incorporating AI ethics, the model provides a more comprehensive perspective, addressing dimensions often overlooked in conventional models. The proposed model is validated across diverse samples, demonstrating both its theoretical and practical significance in predicting AI usage intentions.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100115"},"PeriodicalIF":0.0,"publicationDate":"2024-12-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143154567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploring predictors of AI chatbot usage intensity among students: Within- and between-person relationships based on the technology acceptance model","authors":"Anne-Kathrin Kleine , Insa Schaffernak , Eva Lermer","doi":"10.1016/j.chbah.2024.100113","DOIUrl":"10.1016/j.chbah.2024.100113","url":null,"abstract":"<div><div>The current research investigated the factors associated with the intensity of AI chatbot usage among university students, applying the Technology Acceptance Model (TAM) and its extended version, TAM3. A daily diary study over five days was conducted among university students, distinguishing between inter-individual (between-person) and intra-individual (within-person) variations. Multilevel structural equation modeling (SEM) was used to analyze the data. In Study 1 (<em>N</em> = 72), results indicated that AI chatbot anxiety was associated with perceived ease of use (PEOU) and perceived usefulness (PU), which serially mediated the link with AI chatbot usage intensity. Study 2 (<em>N</em> = 153) supported these findings and further explored the roles of facilitating conditions and subjective norm as additional predictors of PEOU and PU. Results from both studies demonstrated that, at the between-person level, students with higher average levels of PEOU and PU reported more intensive AI chatbot usage. In Study 1, the relationship between PEOU and usage intensity was mediated through PU at the within-person level, while the mediation model was not supported in Study 2. Post-hoc comparisons highlighted much higher variability in PEOU and PU in Study 1 compared to Study 2. The results have practical implications for enhancing AI chatbot adoption in educational settings. Emphasizing user-friendly interfaces, reducing AI-related anxiety, providing robust technical support, and leveraging peer influence may enhance the usage intensity of AI chatbots. This study underscores the necessity of considering both stable individual differences and dynamic daily influences to better understand AI chatbot usage patterns among students.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100113"},"PeriodicalIF":0.0,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preventing promotion-focused goals: The impact of regulatory focus on responsible AI","authors":"Samuel N. Kirshner, Jessica Lawson","doi":"10.1016/j.chbah.2024.100112","DOIUrl":"10.1016/j.chbah.2024.100112","url":null,"abstract":"<div><div>Implementing black-box artificial intelligence (AI) often requires evaluating trade-offs related to responsible AI (RAI) (e.g., the trade-off between performance and features regarding AI's fairness or explainability). Synthesizing theories on regulatory focus and cognitive dissonance, we develop and test a model describing how organizational goals impact the dynamics of AI-based unethical pro-organizational behavior (UPB). First, we show that promotion-focused goals increase AI-based UPB and that RAI values act as a novel mediator. Promotion-focus goals significantly lower fairness in Study 1A and explainability in Study 1B, mediating the relationship between regulatory focus and AI-based UPB. Study 2A further supports RAI values as the driving mechanism of AI-based UPB using a moderation-by-processes design experiment. Study 2B provides evidence that AI-based UPB decisions can, in turn, lead to more unethical RAI values for promotion-focused firms, creating a negative RAI feedback loop within organizations. Our research provides theoretical implications and actionable insights for researchers, organizations, and policymakers seeking to improve the responsible use of AI.</div></div>","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"3 ","pages":"Article 100112"},"PeriodicalIF":0.0,"publicationDate":"2024-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143155181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}