Latest articles in Computers in Human Behavior: Artificial Humans

Understanding AI Chatbot adoption in education: PLS-SEM analysis of user behavior factors
Computers in Human Behavior: Artificial Humans, 2(2), Article 100098. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100098
Md Rabiul Hasan, Nahian Ismail Chowdhury, Md Hadisur Rahman, Md Asif Bin Syed, JuHyeong Ryu
Abstract: The integration of Artificial Intelligence (AI) into education is a recent development, with chatbots emerging as a noteworthy addition to this transformative landscape. As online learning platforms rapidly advance, students need to adapt swiftly to excel in this dynamic environment. Consequently, understanding the acceptance of chatbots, particularly those employing Large Language Models (LLMs) such as Chat Generative Pretrained Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is of paramount importance. Investigating how students accept and view chatbots is essential to directing their incorporation into Industry 4.0 and enabling a smooth transition to Industry 5.0's customized and human-centered methodology. However, existing research on chatbots in education has overlooked key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, creating a significant literature gap. To address this gap, this study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to investigate the determinants of chatbot adoption in education among students, drawing on the Technology Readiness Index and the Technology Acceptance Model. Using a five-point Likert scale for data collection, we gathered 185 responses, which were analyzed in R-Studio, and established 12 hypotheses. The results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use and Perceived Usefulness. Conversely, Discomfort and Insecurity negatively impact Perceived Ease of Use, with only Insecurity negatively affecting Perceived Usefulness. Furthermore, Perceived Ease of Use, Perceived Usefulness, Interaction and Engagement, Accuracy, and Responsiveness all significantly contribute to the Intention to Use, whereas Transparency and Ethics have a negative impact on Intention to Use. Finally, Intention to Use mediates the relationships between Interaction, Engagement, Accuracy, Responsiveness, Transparency, Ethics, and Perception of Decision Making. These findings provide insights for future technology designers, elucidating the critical user behavior factors influencing chatbot adoption and utilization in educational contexts.
Citations: 0
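The methods summary above compresses a lot: 12 hypothesized paths estimated by PLS-SEM over five-point Likert composites, run in R-Studio. As a rough illustration of what a single structural path looks like numerically, the Python sketch below simulates Likert composites for a few of the named constructs and fits one path with ordinary least squares. This is a simplified stand-in, not the study's procedure: real PLS-SEM estimates latent scores iteratively from multi-item blocks, and every coefficient and the `likert_composite` helper here are invented.

```python
# A minimal sketch of the structural ("inner model") stage of a PLS-SEM-style
# analysis, on simulated five-point Likert composites. Construct names follow
# the abstract; all data and effect sizes are synthetic assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 185  # sample size reported in the abstract

def likert_composite(driver, noise=1.0):
    """Simulate a 1-5 composite score loosely driven by `driver` (hypothetical)."""
    raw = 3 + 0.6 * driver + rng.normal(0, noise, n)
    return np.clip(raw, 1, 5)

optimism = rng.normal(0, 1, n)
insecurity = rng.normal(0, 1, n)
peou = likert_composite(optimism - 0.5 * insecurity)  # Perceived Ease of Use
pu = likert_composite(optimism - 0.3 * insecurity)    # Perceived Usefulness
intention = likert_composite((peou - 3) + (pu - 3))   # Intention to Use

# One structural path from the model: Intention to Use ~ PEOU + PU.
X = sm.add_constant(np.column_stack([peou, pu]))
fit = sm.OLS(intention, X).fit()
print(fit.params)   # [intercept, PEOU path, PU path]
print(fit.pvalues)
```

In a full PLS-SEM workflow each construct would be measured by several items and all 12 paths estimated jointly, typically with a dedicated package (e.g., seminr or plspm in R) rather than per-path OLS.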
Making moral decisions with artificial agents as advisors. A fNIRS study
Computers in Human Behavior: Artificial Humans, 2(2), Article 100096. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100096
Eve Florianne Fabre, Damien Mouratille, Vincent Bonnemains, Grazia Pia Palmiotti, Mickael Causse
Abstract: Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study investigated the influence of moral arguments provided by AI-advisors (i.e., a decision aid tool) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (baseline run) or with AI-advisors that provided utilitarian or deontological arguments (AI-advised run), while their brain activity was measured using an fNIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments than to utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving in a less cold-blooded way than machines and to preserve their (self-)image. Taken together, these results suggest that motivational power can lead to a voluntary up- and down-regulation of affective processes during moral decision-making.
Citations: 0
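The headline behavioral contrast here (longer response times for deontological than for utilitarian AI arguments) is a within-subject comparison. The sketch below shows that test on synthetic response times; the sample size and effect sizes are assumptions, and a real fNIRS analysis would first preprocess the optical signal and model the hemodynamic response (e.g., with a toolbox such as MNE-NIRS) before contrasting conditions.

```python
# A minimal sketch of the within-subject contrast described in the abstract:
# paired comparison of response times between deontological- and
# utilitarian-argument trials. An analogous test could be run on a dlPFC
# activity measure (e.g., mean oxy-hemoglobin change). Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 30  # hypothetical sample size, not the paper's

rt_utilitarian = rng.normal(4.0, 0.8, n_participants)  # seconds, per participant
rt_deontological = rt_utilitarian + rng.normal(0.5, 0.4, n_participants)

t, p = stats.ttest_rel(rt_deontological, rt_utilitarian)
print(f"paired t = {t:.2f}, p = {p:.4f}")  # expect longer deontological RTs
```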
Aversion against machines with complex mental abilities: The role of individual differences
Computers in Human Behavior: Artificial Humans, 2(2), Article 100087. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100087
Andrea Grundke, Markus Appel, Jan-Philipp Stein
Abstract: Theory suggests that robots with human-like mental capabilities (i.e., high agency and experience) evoke stronger aversion than robots without these capabilities. Yet, while several studies support this prediction, there is also evidence that the mental prowess of robots could be evaluated positively, at least by some individuals. To help resolve this ambivalence, we focused on rather stable individual differences that may shape users' responses to machines with different levels of (perceived) mental ability. Specifically, we explored four key variables as potential moderators: monotheistic religiosity, the tendency to anthropomorphize, prior attitudes towards robots, and general affinity for complex technology. Two pre-registered online experiments (N1 = 391, N2 = 617) were conducted, using text vignettes to introduce participants to a robot with or without complex, human-like capabilities. Results showed that negative attitudes towards robots increased the relative aversion against machines with (vs. without) complex minds, whereas technology affinity weakened the difference between conditions. Results for monotheistic religiosity were mixed, while the tendency to anthropomorphize had no significant impact on the evoked aversion. Overall, we conclude that certain individual differences play an important role in perceptions of machines with complex minds and should be considered in future research.
Citations: 0
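The abstract describes a moderation pattern: negative prior attitudes amplify, and technology affinity dampens, the aversion gap between the complex-mind and control robots. A standard way to test such a pattern is a regression with a condition-by-trait interaction, sketched below on synthetic data; the variable names and coefficients are illustrative assumptions, not the study's materials.

```python
# A minimal sketch of a moderation test: does technology affinity change the
# effect of condition (complex vs. simple robot mind) on aversion? The
# negative interaction coefficient built into the synthetic data mirrors the
# reported "affinity weakens the difference" pattern.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 617  # size of the second pre-registered experiment
df = pd.DataFrame({
    "complex_mind": rng.integers(0, 2, n),   # 0 = control robot, 1 = complex mind
    "tech_affinity": rng.normal(0, 1, n),
})
df["aversion"] = (0.8 * df["complex_mind"]
                  - 0.4 * df["complex_mind"] * df["tech_affinity"]
                  + rng.normal(0, 1, n))

fit = smf.ols("aversion ~ complex_mind * tech_affinity", data=df).fit()
print(fit.summary().tables[1])  # check the sign of the interaction term
```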
Unleashing ChatGPT's impact in higher education: Student and faculty perspectives
Computers in Human Behavior: Artificial Humans, 2(2), Article 100090. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100090
Parsa Rajabi, Parnian Taghipour, Diana Cukierman, Tenzin Doleck
Abstract: As Chat Generative Pre-trained Transformer (ChatGPT) gains traction, its impact on post-secondary education is increasingly being debated. This qualitative study explores the perceptions of students and faculty members at a research university in Canada regarding ChatGPT's use in a post-secondary setting, focusing on how it could be incorporated and how instructors can respond to this technology. We present a summary of a discussion that took place in a two-hour focus group session with 40 participants from the computer science and engineering departments, and highlight issues surrounding plagiarism, assessment methods, and the appropriate use of ChatGPT. Findings suggest that students are likely to use ChatGPT, but there is a need for specific guidelines, more classroom assessments, and mandatory reporting of ChatGPT use. The study contributes to the emergent research on ChatGPT in higher education and emphasizes the importance of proactively addressing the challenges and opportunities associated with ChatGPT adoption and use. The novelty of the study lies in capturing the perspectives of both students and faculty members. This paper aims to provide a more refined understanding of the complex interplay between AI chatbots and higher education that will help educators navigate the rapidly evolving landscape of AI-driven education.
Citations: 0
News bylines and perceived AI authorship: Effects on source and message credibility
Computers in Human Behavior: Artificial Humans, 2(2), Article 100093. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100093
Haiyan Jia, Alyssa Appelman, Mu Wu, Steve Bien-Aimé
Abstract: With emerging abilities to generate content, artificial intelligence (AI) poses a challenge to identifying authorship of news content. This study focuses on source and message credibility evaluation as AI becomes incorporated into journalistic practices. An experiment (N = 269) explored the effects of news bylines and AI authorship on readers' perceptions. The findings showed that perceived AI contribution, rather than the labeling of the AI role, predicted readers' perceptions of the source and the content. When readers thought AI contributed more to a news article, they reported lower message credibility and source credibility perceptions. Humanness perceptions fully mediated the relationships between perceived AI contribution and both perceived message credibility and source credibility. This study yields theoretical implications for understanding readers' mental model of machine sourceness, and practical implications for newsrooms working toward ethical AI in news automation and production.
Citations: 0
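Full mediation, as reported here, is typically established by showing that the indirect effect (path a times path b) is reliably nonzero while the direct effect is not. The sketch below estimates that indirect effect with a percentile bootstrap on synthetic data; the effect sizes are assumptions, and the published study's exact estimator may differ.

```python
# A minimal sketch of the simple mediation model implied by the abstract
# (perceived AI contribution -> humanness -> credibility), with a percentile
# bootstrap for the indirect effect a*b. All data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 269  # sample size reported in the abstract
ai_contribution = rng.normal(0, 1, n)
humanness = -0.6 * ai_contribution + rng.normal(0, 1, n)  # path a (negative)
credibility = 0.7 * humanness + rng.normal(0, 1, n)       # path b (positive)

def coefs(predictors, y):
    """OLS coefficients [intercept, b1, b2, ...] via least squares."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample participants with replacement
    a = coefs([ai_contribution[idx]], humanness[idx])[1]
    # path b: effect of mediator on outcome, controlling for the predictor
    b = coefs([humanness[idx], ai_contribution[idx]], credibility[idx])[1]
    boot.append(a * b)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> mediation
```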
The efficiency-accountability tradeoff in AI integration: Effects on human performance and over-reliance
Computers in Human Behavior: Artificial Humans, 2(2), Article 100099. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100099
Nicolas Spatola
Abstract: As artificial intelligence proliferates across various sectors, it is crucial to explore the psychological impacts of over-reliance on these systems. This study examines how different formats of chatbot assistance (instruction-only, answer-only, and combined instruction and answer) influence user performance and reliance over time. In two experiments, participants completed reasoning tests with the aid of a chatbot, "Cogbot," offering varying levels of explanatory detail and direct answers. In Experiment 1, participants receiving direct answers showed higher reliance on the chatbot compared to those receiving instructions, aligning with the practical hypothesis that prioritizes efficiency over explainability. Experiment 2 introduced transfer problems with incorrect AI guidance, revealing that initial reliance on direct answers impaired performance on subsequent tasks when the AI erred, supporting concerns about automation complacency. Findings indicate that while efficiency-focused AI solutions enhance immediate performance, they risk over-assimilation and reduced vigilance, leading to significant performance drops when AI accuracy falters. Conversely, explanatory guidance did not significantly improve outcomes in the absence of direct answers. These results highlight the complex dynamics between AI efficiency and accountability, suggesting that responsible AI adoption requires balancing streamlined functionality with safeguards against over-reliance.
Citations: 0
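The Experiment 2 design suggests a simple behavioral index of over-reliance: how often participants adopt the chatbot's answer on exactly those trials where it is wrong. The sketch below computes that index for the three assistance formats on simulated trials; the adoption probabilities and the 70% chatbot accuracy are invented parameters, not the paper's.

```python
# A minimal sketch of an automation-complacency index: the rate of following
# the chatbot's answer on trials where the chatbot is incorrect, split by
# assistance format. All trial data and probabilities are synthetic.
import numpy as np

rng = np.random.default_rng(5)
n_trials = 200
formats = {"instruction_only": 0.25, "answer_only": 0.55, "combined": 0.45}

for name, follow_prob in formats.items():
    ai_correct = rng.random(n_trials) < 0.7       # chatbot right on 70% of trials
    followed = rng.random(n_trials) < follow_prob  # participant adopted AI answer?
    over_reliance = followed[~ai_correct].mean()   # adoption rate on wrong-AI trials
    print(f"{name}: over-reliance = {over_reliance:.2f}")
```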
Can you repeat that again? Investigating the mediating effects of perceived accommodation appropriateness for accommodative voice-based assistants
Computers in Human Behavior: Artificial Humans, 2(2), Article 100102. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100102
Matthew J.A. Craig, Xialing Lin, Chad Edwards, Autumn Edwards
Abstract: The widespread use of Voice-Based Assistants (VBAs) in various applications has introduced a new dimension to human-machine communication. This study explores how users assess VBAs exhibiting either excessive or insufficient communication accommodation in imagined initial interactions. Drawing on Communication Accommodation Theory (CAT) and the Stereotype Content Model (SCM), the present research investigates the mediating effect of perceived accommodation appropriateness on the relationship between the SCM's warmth and competence dimensions and evaluations of the VBA as a communicator and a speaker. Participants rated the underaccommodative VBA significantly lower as a communicator and as a speaker; these evaluations were indirectly predicted by the warmth and competence dimensions via the perceived appropriateness of the communication. The implications of our findings and future research are discussed.
Citations: 0
Can ChatGPT read who you are?
Computers in Human Behavior: Artificial Humans, 2(2), Article 100088. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100088
Erik Derner, Dalibor Kučera, Nuria Oliver, Jan Zahálka
Abstract: The interplay between artificial intelligence (AI) and psychology, particularly in personality assessment, represents an important emerging area of research. Accurate personality trait estimation is crucial not only for enhancing personalization in human-computer interaction but also for a wide variety of applications ranging from mental health to education. This paper analyzes the capability of a generic chatbot, ChatGPT, to effectively infer personality traits from short texts. We report the results of a comprehensive user study featuring texts written in Czech by a representative population sample of 155 participants. Their self-assessments based on the Big Five Inventory (BFI) questionnaire serve as the ground truth. We compare the personality trait estimations made by ChatGPT against those by human raters and report ChatGPT's competitive performance in inferring personality traits from text. We also uncover a 'positivity bias' in ChatGPT's assessments across all personality dimensions and explore the impact of prompt composition on accuracy. This work contributes to the understanding of AI capabilities in psychological assessment, highlighting both the potential and limitations of using large language models for personality inference. Our research underscores the importance of responsible AI development, considering ethical implications such as privacy, consent, autonomy, and bias in AI applications.
Citations: 0
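A study like this implies a scoring loop: prompt the model for a per-trait rating of each text, then compare the estimates with BFI self-reports for agreement and bias. The sketch below outlines such a harness; `ask_model` is a hypothetical placeholder for a real chat-completion call, and the prompt wording and the 1-5 scale are assumptions rather than the paper's protocol.

```python
# A minimal sketch of a Big Five scoring harness (Python 3.10+): collect model
# ratings per trait, then compute agreement with self-reports (Pearson r) and
# a positivity-bias index (mean over-estimation). `ask_model` is a stub; wire
# in whatever chat API you use and parse one number from its reply.
from statistics import correlation, mean

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

def ask_model(text: str, trait: str) -> float:
    """Hypothetical: send the prompt below to a chat model, parse a 1-5 rating."""
    prompt = (f"On a scale from 1 to 5, rate the author's {trait} "
              f"based on this text:\n{text}\nAnswer with one number.")
    raise NotImplementedError("wire up your chat-completion client here")

def agreement(estimates: list[float], self_reports: list[float]) -> float:
    """Pearson correlation between model estimates and BFI self-reports."""
    return correlation(estimates, self_reports)

def positivity_bias(estimates: list[float], self_reports: list[float]) -> float:
    """Mean over-estimation relative to self-report (positive = inflated)."""
    return mean(e - s for e, s in zip(estimates, self_reports))
```

Injecting the model call as a stub, rather than hard-coding a specific vendor API, keeps the evaluation logic testable and avoids committing to any particular client signature.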
Understanding young adults' attitudes towards using AI chatbots for psychotherapy: The role of self-stigma
Computers in Human Behavior: Artificial Humans, 2(2), Article 100086. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100086
Benjamin David Hoffman, Michelle Leanne Oppert, Mikaela Owen
Abstract: Mental disorders impact a large proportion of individuals worldwide, with young adults being particularly susceptible to poor mental health. Past research shows that help-seeking self-stigma plays a vital role in deterring help-seeking among young adults; however, this relationship has primarily been examined in the context of human-delivered psychotherapy. The present study aimed to understand how young adults' perceptions of help-seeking self-stigma associated with different modes of psychotherapy, specifically human-delivered and artificial intelligence (AI)-delivered, influence attitudes towards using AI chatbots for psychotherapy. This study employed a cross-sectional survey design to measure perceived help-seeking self-stigma and attitudes towards both human- and AI-delivered psychotherapy. The results demonstrated that high help-seeking self-stigma associated with human-delivered psychotherapy was linked to more negative attitudes towards human-delivered psychotherapy but more positive attitudes towards AI-delivered psychotherapy. Moreover, high help-seeking self-stigma associated with AI-delivered psychotherapy was linked to more negative attitudes towards AI-delivered psychotherapy but more positive attitudes towards human-delivered psychotherapy. These findings have important real-world implications for future clinical practice and mental health service delivery. The results indicate that young adults who are reluctant to engage with human-delivered psychotherapy due to help-seeking self-stigma may be more inclined to seek help through alternative modes of psychotherapy, such as AI chatbots. Limitations and future directions are discussed.
Citations: 0
Perils, power and promises: Latent profile analysis on the attitudes towards artificial intelligence (AI) among middle-aged and older adults in Hong Kong
Computers in Human Behavior: Artificial Humans, 2(2), Article 100091. Pub Date: 2024-08-01. DOI: 10.1016/j.chbah.2024.100091
Ngai-Yin Eric Shum, Hi-Po Bobo Lau
Abstract: With the increasing influence of artificial intelligence (AI) on various aspects of society, understanding public attitudes towards AI becomes crucial. This study investigated attitudes towards AI among middle-aged and older adults in Hong Kong. In June 2023, an online survey was conducted among a sample of 740 smartphone users aged 45 years or older (max = 78) in Hong Kong. Using exploratory factor analysis, we found three factors in the General Attitude to Artificial Intelligence Scale (GAAIS): Perils, Power, and Promises. Subsequently, latent profile analysis revealed three profiles: (i) Enthusiasts (18.4%; high on Promises and Power but low on Perils); (ii) Skeptics (12.3%; high on Perils but low on Promises and Power); and (iii) Indecisive (69.3%; moderate on all three factors). The Enthusiasts were more likely to be male and to report higher socio-economic status, better self-rated health, and greater mobile device proficiency, optimism, and innovativeness, as well as less insecurity with technology, compared to the Indecisive and, in turn, the Skeptics. Our findings suggest that most middle-aged and older adults in Hong Kong hold an ambivalent view towards AI, appreciating its power and potential while also cognizant of the perils it may entail. Our findings are timely given recent debates on the ethical use of AI evoked by smartphone applications such as ChatGPT, and will be valuable for practitioners and scholars developing inclusive AI-facilitated services and applications.
Citations: 0
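Latent profile analysis over continuous factor scores is commonly implemented as a Gaussian mixture model, with the number of profiles chosen by an information criterion such as BIC. The sketch below runs that selection on synthetic Perils/Power/Promises scores shaped like the three reported profiles; the cluster means and spreads are invented, with group sizes matching the reported percentages of n = 740.

```python
# A minimal sketch of the latent profile analysis step: fit Gaussian mixtures
# over the three GAAIS factor scores and pick the number of profiles by BIC.
# LPA is commonly realized as a GMM with diagonal covariances. Data synthetic.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# Columns: Perils, Power, Promises. Sizes approximate 18.4% / 12.3% / 69.3%.
enthusiasts = rng.normal([-1.0, 1.0, 1.0], 0.5, (136, 3))
skeptics = rng.normal([1.0, -1.0, -1.0], 0.5, (91, 3))
indecisive = rng.normal([0.0, 0.0, 0.0], 0.5, (513, 3))
scores = np.vstack([enthusiasts, skeptics, indecisive])  # n = 740

bics = {}
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          n_init=5, random_state=0).fit(scores)
    bics[k] = gmm.bic(scores)
best_k = min(bics, key=bics.get)
print(bics, "-> best number of profiles:", best_k)  # expect 3 on data like these
```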