Latest articles from Computers in Human Behavior: Artificial Humans

The great AI witch hunt: Reviewers' perception and (Mis)conception of generative AI in research writing
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100095
Hilda Hadan, Derrick M. Wang, Reza Hadi Mogavi, Joseph Tu, Leah Zhang-Kennedy, Lennart E. Nacke
Generative AI (GenAI) use in research writing is growing fast. However, it is unclear how peer reviewers recognize or misjudge AI-augmented manuscripts. To investigate the impact of AI-augmented writing on peer reviews, we conducted a snippet-based online survey with 17 peer reviewers from top-tier HCI conferences. Our findings indicate that while AI-augmented writing improves readability, language diversity, and informativeness, it often lacks research details and reflective insights from authors. Reviewers consistently struggled to distinguish between human and AI-augmented writing, but their judgements remained consistent. They noted the loss of a "human touch" and subjective expressions in AI-augmented writing. Based on our findings, we advocate for reviewer guidelines that promote impartial evaluations of submissions, regardless of any personal biases towards GenAI. The quality of the research itself should remain a priority in reviews, regardless of any preconceived notions about the tools used to create it. We emphasize that researchers must maintain their authorship and control over the writing process, even when using GenAI's assistance.
(Volume 2, Issue 2, Article 100095)
Citations: 0
Differences between human and artificial/augmented intelligence in medicine
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100084
Scott Monteith , Tasha Glenn , John R. Geddes , Eric D. Achtyes , Peter C. Whybrow , Michael Bauer
The emphasis on artificial intelligence (AI) is rapidly increasing across many diverse aspects of society. This manuscript discusses some of the key topics related to the expansion of AI. These include a comparison of the unique cognitive capabilities of human intelligence with AI, and the potential risks of using AI in clinical medicine. General public attitudes towards AI are also discussed, including patient perspectives. As the promotion of AI in high-risk situations such as clinical medicine expands, the limitations, risks, and benefits of AI need to be better understood.
(Volume 2, Issue 2, Article 100084)
Citations: 0
Understanding AI Chatbot adoption in education: PLS-SEM analysis of user behavior factors
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100098
Md Rabiul Hasan , Nahian Ismail Chowdhury , Md Hadisur Rahman , Md Asif Bin Syed , JuHyeong Ryu
The integration of Artificial Intelligence (AI) into education is a recent development, with chatbots emerging as a noteworthy addition to this transformative landscape. As online learning platforms rapidly advance, students need to adapt swiftly to excel in this dynamic environment. Consequently, understanding the acceptance of chatbots, particularly those employing Large Language Models (LLMs) such as Chat Generative Pretrained Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is of paramount importance. Investigating how students accept and view chatbots is essential to directing their incorporation into Industry 4.0 and enabling a smooth transition to Industry 5.0's customized and human-centered methodology. However, existing research on chatbots in education has overlooked key behavior-related aspects, such as Optimism, Innovativeness, Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and Accuracy, creating a significant literature gap. To address this gap, this study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to investigate the determinants of chatbot adoption in education among students, drawing on the Technology Readiness Index and the Technology Acceptance Model. We established 12 hypotheses and, collecting data on a five-point Likert scale, gathered a total of 185 responses, which were analyzed in RStudio. The results showed that Optimism and Innovativeness are positively associated with Perceived Ease of Use and Perceived Usefulness. Conversely, Discomfort and Insecurity negatively impact Perceived Ease of Use, with only Insecurity negatively affecting Perceived Usefulness. Furthermore, Perceived Ease of Use, Perceived Usefulness, Interaction and Engagement, Accuracy, and Responsiveness all significantly contribute to the Intention to Use, whereas Transparency and Ethics have a negative impact on Intention to Use. Finally, Intention to Use mediates the relationships between Interaction, Engagement, Accuracy, Responsiveness, Transparency, Ethics, and Perception of Decision Making. These findings provide insights for future technology designers, elucidating critical user behavior factors influencing chatbot adoption and utilization in educational contexts.
(Volume 2, Issue 2, Article 100098)
Citations: 0
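Before fitting the structural paths, PLS-SEM-style analyses of Likert data typically check that each multi-item construct is internally consistent. A minimal sketch of that reliability step (the construct, respondents, and scores below are hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability (Cronbach's alpha) for one Likert block.

    items: shape (n_respondents, n_items) -- e.g. the five-point Likert
    answers for a single construct such as 'Optimism'.
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 8 respondents answering 3 items of one construct.
block = np.array([
    [1, 2, 1], [2, 2, 2], [3, 3, 3], [4, 4, 5],
    [5, 5, 5], [2, 1, 2], [4, 4, 4], [3, 3, 2],
], dtype=float)
alpha = cronbach_alpha(block)   # high alpha: the items hang together well
```

Constructs that clear a conventional threshold (often alpha above roughly 0.7) would then enter the structural model relating, e.g., Optimism to Perceived Ease of Use.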
Making moral decisions with artificial agents as advisors. A fNIRS study
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100096
Eve Florianne Fabre , Damien Mouratille , Vincent Bonnemains , Grazia Pia Palmiotti , Mickael Causse
Artificial Intelligence (AI) is on the verge of impacting every domain of our lives. It is increasingly being used as an advisor to assist in making decisions. The present study investigated the influence of moral arguments provided by AI-advisors (i.e., a decision aid tool) on human moral decision-making and the associated neural correlates. Participants were presented with sacrificial moral dilemmas and had to make moral decisions either by themselves (i.e., baseline run) or with AI-advisors that provided utilitarian or deontological arguments (i.e., AI-advised run), while their brain activity was measured using an fNIRS device. Overall, AI-advisors significantly influenced participants. Longer response times and a decrease in right dorsolateral prefrontal cortex activity were observed in response to deontological arguments than to utilitarian arguments. Being provided with deontological arguments by machines appears to have led to a decreased appraisal of the affective response to the dilemmas. This resulted in a reduced level of utilitarianism, supposedly in an attempt to avoid behaving in a less cold-blooded way than machines and to preserve their (self-)image. Taken together, these results suggest that motivational power can lead to a voluntary up- and down-regulation of affective processes during moral decision-making.
(Volume 2, Issue 2, Article 100096)
Citations: 0
Aversion against machines with complex mental abilities: The role of individual differences
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100087
Andrea Grundke , Markus Appel , Jan-Philipp Stein
Theory suggests that robots with human-like mental capabilities (i.e., high agency and experience) evoke stronger aversion than robots without these capabilities. Yet, while several studies support this prediction, there is also evidence that the mental prowess of robots could be evaluated positively, at least by some individuals. To help resolve this ambivalence, we focused on rather stable individual differences that may shape users' responses to machines with different levels of (perceived) mental ability. Specifically, we explored four key variables as potential moderators: monotheistic religiosity, the tendency to anthropomorphize, prior attitudes towards robots, and the general affinity for complex technology. Two pre-registered online experiments (N1 = 391, N2 = 617) were conducted, using text vignettes to introduce participants to a robot with or without complex, human-like capabilities. Results showed that negative attitudes towards robots increased the relative aversion against machines with (vs. without) complex minds, whereas technology affinity weakened the difference between conditions. Results for monotheistic religiosity were mixed, while the tendency to anthropomorphize had no significant impact on the evoked aversion. Overall, we conclude that certain individual differences play an important role in perceptions of machines with complex minds and should be considered in future research.
(Volume 2, Issue 2, Article 100087)
Citations: 0
Unleashing ChatGPT's impact in higher education: Student and faculty perspectives
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100090
Parsa Rajabi , Parnian Taghipour , Diana Cukierman , Tenzin Doleck
As Chat Generative Pre-trained Transformer (ChatGPT) gains traction, its impact on post-secondary education is increasingly being debated. This qualitative study explores the perception of students and faculty members at a research university in Canada regarding ChatGPT's use in a post-secondary setting, focusing on how it could be incorporated and in what ways instructors can respond to this technology. We present the summary of a discussion that took place in a 2-hour focus group session with 40 participants from the computer science and engineering departments, and highlight issues surrounding plagiarism, assessment methods, and the appropriate use of ChatGPT. Findings suggest that students are likely to use ChatGPT, but there is a need for specific guidelines, more classroom assessments, and mandatory reporting of ChatGPT use. The study contributes to the emergent research on ChatGPT in higher education and emphasizes the importance of proactively addressing challenges and opportunities associated with ChatGPT adoption and use. The novelty of the study lies in capturing the perspectives of both students and faculty members. This paper aims to provide a more refined understanding of the complex interplay between AI chatbots and higher education that will help educators navigate the rapidly evolving landscape of AI-driven education.
(Volume 2, Issue 2, Article 100090)
Citations: 0
News bylines and perceived AI authorship: Effects on source and message credibility
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100093
Haiyan Jia , Alyssa Appelman , Mu Wu , Steve Bien-Aimé
With emerging abilities to generate content, artificial intelligence (AI) poses a challenge to identifying authorship of news content. This study focuses on source and message credibility evaluation as AI becomes incorporated into journalistic practices. An experiment (N = 269) explored the effects of news bylines and AI authorship on readers' perceptions. The findings showed that perceived AI contribution, rather than the labeling of the AI role, predicted readers' perceptions of the source and the content. When readers thought AI contributed more to a news article, they reported lower message credibility and source credibility perceptions. Humanness perceptions fully mediated the relationships between perceived AI contribution and perceived message credibility and source credibility. This study yields theoretical implications for understanding readers' mental model of machine sourceness and practical implications for newsrooms working toward ethical AI in news automation and production.
(Volume 2, Issue 2, Article 100093)
Citations: 0
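A full-mediation claim like the one above (perceived AI contribution lowers credibility only via humanness perceptions) is commonly quantified as a product-of-coefficients indirect effect. A minimal sketch with simulated data, assuming the variable names and effect directions only for illustration (this is not the study's dataset):

```python
import numpy as np

# Simulated illustration of X -> M -> Y mediation:
# perceived AI contribution (X) -> perceived humanness (M) -> credibility (Y).
rng = np.random.default_rng(42)
n = 269                                   # matches the reported sample size
ai_contribution = rng.normal(size=n)
humanness = -0.6 * ai_contribution + rng.normal(scale=0.5, size=n)
credibility = 0.7 * humanness + rng.normal(scale=0.5, size=n)

def ols_slope(x, y):
    """Least-squares slope of y on x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

a = ols_slope(ai_contribution, humanness)          # X -> M path
# M -> Y path, controlling for X:
X = np.column_stack([np.ones(n), ai_contribution, humanness])
beta, *_ = np.linalg.lstsq(X, credibility, rcond=None)
b = beta[2]
indirect = a * b   # negative: more perceived AI contribution, less credibility
```

In practice the indirect effect would also get a bootstrap confidence interval; the sketch only shows how the two paths combine.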
The efficiency-accountability tradeoff in AI integration: Effects on human performance and over-reliance
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100099
Nicolas Spatola
As artificial intelligence proliferates across various sectors, it is crucial to explore the psychological impacts of over-reliance on these systems. This study examines how different formats of chatbot assistance (instruction-only, answer-only, and combined instruction and answer) influence user performance and reliance over time. In two experiments, participants completed reasoning tests with the aid of a chatbot, "Cogbot," offering varying levels of explanatory detail and direct answers. In Experiment 1, participants receiving direct answers showed higher reliance on the chatbot compared to those receiving instructions, aligning with the practical hypothesis that prioritizes efficiency over explainability. Experiment 2 introduced transfer problems with incorrect AI guidance, revealing that initial reliance on direct answers impaired performance on subsequent tasks when the AI erred, supporting concerns about automation complacency. Findings indicate that while efficiency-focused AI solutions enhance immediate performance, they risk over-assimilation and reduced vigilance, leading to significant performance drops when AI accuracy falters. Conversely, explanatory guidance did not significantly improve outcomes absent direct answers. These results highlight the complex dynamics between AI efficiency and accountability, suggesting that responsible AI adoption requires balancing streamlined functionality with safeguards against over-reliance.
(Volume 2, Issue 2, Article 100099)
Citations: 0
Can you repeat that again? Investigating the mediating effects of perceived accommodation appropriateness for accommodative voice-based assistants
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100102
Matthew J.A. Craig , Xialing Lin , Chad Edwards , Autumn Edwards
The widespread use of Voice-Based Assistants (VBAs) in various applications has introduced a new dimension to human-machine communication. This study explores how users assess VBAs exhibiting either excessive or insufficient communication accommodation in imagined initial interactions. Drawing on Communication Accommodation Theory (CAT) and the Stereotype Content Model (SCM), the present research investigates the mediating effect of perceived accommodation appropriateness on the relationship between SCM warmth and competence and evaluations of the VBA as a communicator and a speaker. Participants rated the underaccommodative VBA significantly lower both as a communicator and as a speaker; these evaluations were indirectly predicted by warmth and competence stereotype content via the perceived appropriateness of the communication. The implications of our findings and future research are discussed.
(Volume 2, Issue 2, Article 100102)
Citations: 0
Can ChatGPT read who you are?
Computers in Human Behavior: Artificial Humans Pub Date : 2024-08-01 DOI: 10.1016/j.chbah.2024.100088
Erik Derner , Dalibor Kučera , Nuria Oliver , Jan Zahálka
The interplay between artificial intelligence (AI) and psychology, particularly in personality assessment, represents an important emerging area of research. Accurate personality trait estimation is crucial not only for enhancing personalization in human-computer interaction but also for a wide variety of applications ranging from mental health to education. This paper analyzes the capability of a generic chatbot, ChatGPT, to effectively infer personality traits from short texts. We report the results of a comprehensive user study featuring texts written in Czech by a representative population sample of 155 participants. Their self-assessments based on the Big Five Inventory (BFI) questionnaire serve as the ground truth. We compare the personality trait estimations made by ChatGPT against those by human raters and report ChatGPT's competitive performance in inferring personality traits from text. We also uncover a 'positivity bias' in ChatGPT's assessments across all personality dimensions and explore the impact of prompt composition on accuracy. This work contributes to the understanding of AI capabilities in psychological assessment, highlighting both the potential and limitations of using large language models for personality inference. Our research underscores the importance of responsible AI development, considering ethical implications such as privacy, consent, autonomy, and bias in AI applications.
(Volume 2, Issue 2, Article 100088)
Citations: 0
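The evaluation behind findings like these comes down to comparing model-estimated trait scores with BFI self-reports: rank agreement can be summarized with a Pearson correlation, while a 'positivity bias' shows up as a systematic positive offset even when the correlation is high. A small sketch with hypothetical scores (the numbers are made up for illustration):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation between model-estimated and self-reported scores."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical scores on one Big Five trait (1-5 scale), 8 participants:
self_report  = [2.0, 3.5, 4.0, 1.5, 3.0, 4.5, 2.5, 5.0]
llm_estimate = [2.5, 3.5, 4.5, 2.0, 3.5, 5.0, 3.0, 5.0]  # shifted upward

r = pearson_r(self_report, llm_estimate)                  # agreement is high
bias = float(np.mean(np.array(llm_estimate)
                     - np.array(self_report)))            # but positive offset
```

High r with a positive mean difference is exactly the pattern a positivity bias produces: the model orders people roughly correctly while rating everyone a bit too favorably.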