Computers in Human Behavior: Artificial Humans (Latest Publications)

Experimental evaluation of cognitive agents for collaboration in human-autonomy cyber defense teams
Yinuo Du, Baptiste Prébot, Tyler Malloy, Fei Fang, Cleotilde Gonzalez
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100148. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100148
Abstract: Autonomous agents are becoming increasingly prevalent and capable of collaborating with humans on interdependent tasks as teammates. There is increasing recognition that human-like agents might be natural human collaborators. However, there has been limited work on designing agents according to the principles of human cognition or on empirically testing their teamwork effectiveness. In this study, we introduce the Team Defense Game (TDG), a novel experimental platform for investigating human-autonomy teaming in cyber defense scenarios. We design an agent that relies on episodic memory to determine its actions (Cognitive agent) and compare its effectiveness with two types of autonomous agents: one that relies on heuristic reasoning (Heuristic agent) and one that behaves randomly (Random agent). These agents are compared in a human-autonomy team (HAT) performing a cyber-protection task in the TDG. We systematically evaluate how autonomous teammates' abilities and competence impact the team's interaction and outcomes. The results revealed that teams with Cognitive agents are the most effective, followed by teams with Heuristic and Random agents. Evaluation of collaborative team process metrics suggests that the Cognitive agent is more adaptive to the individual play styles of human teammates, but it is also less consistent and less predictable than the Heuristic agent. Competent agents (Cognitive and Heuristic agents) require less human effort but might cause over-reliance. A post-experiment questionnaire showed that competent agents are rated as more trustworthy and cooperative than Random agents. We also found that human participants' subjective ratings correlate with their team performance, and that humans tend to take the credit or responsibility for the team. Our work advances HAT research by providing empirical evidence of how the design of different autonomous agents (cognitive, heuristic, and random) affects team performance and dynamics in cybersecurity contexts. We propose that autonomous agents for HATs should possess both competence and human-like cognition while also ensuring predictable behavior or clear explanations to maintain human trust. Additionally, they should proactively seek human input to enhance teamwork effectiveness.
Citations: 0
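The abstract above does not detail the Cognitive agent's mechanism beyond its reliance on episodic memory. As a point of reference, here is a minimal instance-based-learning-style sketch, assuming an ACT-R-like activation over stored episodes; the class name, parameters, and the toy defense-state encoding are all illustrative, not the authors' implementation.

```python
import math
import random
from collections import defaultdict

DECAY = 0.5   # memory decay rate d (a common ACT-R default)
NOISE = 0.25  # activation noise scale

class EpisodicAgent:
    """Toy instance-based agent: stores episodes, blends past outcomes."""

    def __init__(self):
        # episodes[(state, action)] -> list of (timestamp, outcome)
        self.episodes = defaultdict(list)
        self.t = 0

    def record(self, state, action, outcome):
        self.t += 1
        self.episodes[(state, action)].append((self.t, outcome))

    def _activation(self, timestamp):
        # A_i = ln((t - t_i + 1)^(-d)) + noise, for a single instance
        return math.log((self.t - timestamp + 1) ** -DECAY) + random.gauss(0, NOISE)

    def choose(self, state, actions):
        # Pick the action with the highest blended (retrieval-weighted) value.
        best, best_val = None, float("-inf")
        for a in actions:
            history = self.episodes[(state, a)]
            if not history:
                val = random.random()  # explore actions never tried
            else:
                weights = [math.exp(self._activation(ts)) for ts, _ in history]
                total = sum(weights)
                val = sum(w / total * out for w, (_, out) in zip(weights, history))
            if val > best_val:
                best, best_val = a, val
        return best

agent = EpisodicAgent()
agent.record("node_compromised", "patch", outcome=1.0)
agent.record("node_compromised", "ignore", outcome=-1.0)
print(agent.choose("node_compromised", ["patch", "ignore"]))  # -> "patch"
```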
Exploring the connecting potential of AI: Integrating human interpersonal listening and parasocial support into human-computer interactions
Netta Weinstein, Guy Itzchakov, Michael R. Maniaci
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100149. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100149
Abstract: Conversational artificial intelligence (AI) can be harnessed to provide supportive parasocial interactions that rival or even exceed social support from human interactions. High-quality listening in human conversations fosters social connection that heals interpersonal wounds and lessens loneliness. While AI can furnish advice, listening involves each speaker's perception of positive intention, a quality that AI can only simulate. Can such deep-seated support be provided by AI? This research examined two previously siloed areas of knowledge: the healing capabilities of human interpersonal listening and the potential for AI to produce parasocial experiences of connection. Three experiments (N = 668) addressed this question by manipulating conversational AI listening to test effects on perceived listening, psychological needs, and state loneliness. We show that, when prompted, AI could provide high-quality listening, characterized by careful attention and a positive environment for self-expression. Moreover, AI's high-quality listening was perceived as better than participants' average human interaction (Studies 1–3). Receiving high-quality listening predicted greater relatedness (Study 3) and autonomy (Studies 2 and 3) need satisfaction after participants discussed rejection (Studies 2–3), loneliness (Study 3), and isolating attitudes (Study 3). Despite this, we did not observe the downstream lessening of loneliness typically observed in human interactions, even for those high in trait loneliness (Study 3). These findings contrast clearly with research on human interactions and hint at the potential power, but also the limits, of AI in replicating supportive human interactions.
Citations: 0
An insight into humans helping Robots: The role of attitudes, anthropomorphic cues, and context of use
Andreea E. Potinteu, Nadia Said, Georg Jahn, Markus Huff
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100159. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100159
Abstract: Robots are increasingly present in our society. Their successful integration depends, however, on understanding and fostering pro-social behavior towards robots, in this case, helping. To better understand people's reported willingness to help robots across different contexts (delivery, medical, service, and security), we conducted two preregistered studies on a German-speaking population (N = 414 and N = 541, representative of age and gender). We assessed attitudes, knowledge about robots, and anthropomorphism, and investigated their effect on reported willingness to help. Results show that positive attitudes significantly predicted higher reported willingness to help. Having more knowledge about robots increased reported willingness to help in Study 2. Additionally, we found no effect of anthropomorphism, neither in the form of robot appearance nor as participants' own view of robots, on reported willingness to help. Furthermore, results point to a context-dependency of willingness to help, with participants preferring to help robots in a medical context compared to a security one, for example. Our findings thus highlight the relevance of context and attitudes in understanding helping behavior towards robots. Additionally, our results raise questions about the relevance of anthropomorphism in pro-sociality toward robots.
Citations: 0
Becoming dehumanized by a service robot: An empirical examination of what happens when non-humans perceive us as less than full humans
Magnus Söderlund
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100163. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100163
Abstract: Service robots are expected to become increasingly common, and one fundamental task for them is to detect when a human user is present. Thus, they need to be able to correctly categorize a user as a "user". So far, however, little is known about how users react to robots' understanding of what a user is in terms of a superordinate social category, namely "human". Given that we humans are sensitive to how we are categorized by others, particularly when we are dehumanized in the categorization process, it was assumed in the present study that this sensitivity may materialize also when the categorizer is a (humanlike) service robot. This assumption was examined with two between-subjects experiments in which a service robot's categorization of the user was manipulated (low vs. high dehumanization). The main finding was that high robotic dehumanization had a negative impact on the user's overall evaluation of the robot.
Citations: 0
AI literacy and trust: A multi-method study of Human-GAI team collaboration
Zilong Pan, Ozias A. Moore, Antigoni Papadimitriou, Jiayan Zhu
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100162. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100162
Abstract: As artificial intelligence (AI) becomes increasingly integrated into team settings for collaboration with humans, understanding the dynamics of trust and AI literacy is essential for enhancing team effectiveness. This study investigates the relationship between trust and AI literacy in human-generative AI (GAI) team collaboration, focusing on how AI literacy affects trust formation in these interactions. Drawing upon foundational teamwork literature and AI literacy frameworks, we conducted a multi-method investigation involving 116 undergraduate team members across 23 project teams throughout a semester. In Study 1, qualitative findings revealed distinct attitudes toward GAI as a teammate, categorized as trust, distrust, and ambivalence. Study 2 employed quantitative methods to determine predictors of trust in GAI, demonstrating that AI knowledge and perceived value, key components of AI literacy, significantly influenced perceptions of trust. Notably, perceptions of GAI accuracy emerged as a critical determinant of trust. Our findings highlight the complex interplay between AI literacy and trust in human-GAI collaboration. We observed a paradox: increased AI literacy can enhance collaboration but may also lead to hesitancy in future AI use. We contribute to advancing the understanding of human-AI collaboration by highlighting the critical role of AI literacy in shaping trust and socio-technical team dynamics. Our study provides evidence demonstrating the importance of targeted AI literacy development in building trust and fostering effective collaboration in human-GAI teams. These findings provide a foundation for research aimed at optimizing human-GAI teamwork and developing adaptive AI literacy frameworks, empowering individuals to effectively engage with AI across diverse collaborative settings.
Citations: 0
Cognitive phantoms in large language models through the lens of latent variables
Sanne Peereboom, Inga Schwabe, Bennett Kleinberg
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100161. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100161
Abstract: Large language models (LLMs) increasingly reach real-world applications, necessitating a better understanding of their behaviour. Their size and complexity complicate traditional assessment methods, prompting alternative approaches inspired by the field of psychology. Recent studies administering psychometric questionnaires to LLMs report human-like traits in LLMs that potentially influence LLM behaviour. However, this approach suffers from a validity problem: it presupposes that these traits exist in LLMs and that they are measurable with tools designed for humans. Typical procedures rarely acknowledge this validity problem, instead comparing and interpreting average LLM scores. This study investigates the problem by comparing latent structures of personality between humans and three LLMs using two validated personality questionnaires. Findings suggest that questionnaires designed for humans do not validly measure similar constructs in LLMs, and that these constructs may not exist in LLMs at all, highlighting the need for psychometric analyses of LLM responses to avoid chasing cognitive phantoms.
Citations: 0
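To make the latent-variable argument concrete, here is a minimal sketch of the kind of factor-structure comparison the abstract describes, using synthetic data and scikit-learn's FactorAnalysis. It is not the study's psychometric pipeline, which would use validated questionnaires and formal measurement-invariance testing; the data and the variance heuristic below are purely illustrative.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_items, n_factors, n_resp = 20, 2, 500

# Synthetic "human" responses with a clear two-factor structure:
# items 1-10 load on factor 1, items 11-20 on factor 2.
loadings = np.zeros((n_items, n_factors))
loadings[:10, 0], loadings[10:, 1] = 0.7, 0.7
human = (rng.normal(size=(n_resp, n_factors)) @ loadings.T
         + rng.normal(scale=0.5, size=(n_resp, n_items)))

# Synthetic "LLM" responses with no coherent latent structure.
llm = rng.normal(size=(n_resp, n_items))

for name, data in [("human", human), ("llm", llm)]:
    fa = FactorAnalysis(n_components=n_factors).fit(data)
    # Rough heuristic: share of total item variance explained by the
    # extracted factors. A low value suggests the questionnaire is not
    # measuring a coherent construct in that population.
    explained = np.sum(fa.components_ ** 2) / np.sum(np.var(data, axis=0))
    print(f"{name}: variance captured ~ {explained:.2f}")
```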
Augmented reality and robotics in education: A systematic literature review
Christina Pasalidou, Chris Lytridis, Avgoustos Tsinakos, Nikolaos Fachantidis
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100157. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100157
Abstract: Integrating cutting-edge technologies into education has been a continuous goal to enhance teaching and learning experiences. Augmented Reality (AR) and robotics are two emerging technologies that have shown promise in transforming educational environments. This paper presents a systematic review of the literature on the combination of AR and robotics for educational purposes, identifying key applications, benefits, and trends. Using the PRISMA methodology, 69 relevant studies from five major databases were analysed and categorised into three themes: (a) AR and Socially Assistive Robots (SAR), (b) AR-assisted educational robotics, and (c) AR in robotics/engineering education. The review provides insights into how AR-enhanced robotics applications across primary, secondary, and higher education provide visualizations, multimodal feedback, and immersive experiences. Key findings suggest that while the interactive features of AR and the embodiment of robots show promising results for learning, fostering motivation, excitement, positive attitudes, and enriched educational experiences, challenges such as technological complexity and cost remain barriers to widespread adoption. Future research should focus on pedagogical frameworks and large-scale implementations to optimize AR-robotics integration in diverse educational settings.
Citations: 0
Promoting online evaluation skills through educational chatbots
Nils Knoth, Carolin Hahnel, Mirjam Ebersbach
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100160. Pub Date: 2025-05-01. DOI: 10.1016/j.chbah.2025.100160
Abstract: Online evaluation skills such as assessing the credibility and relevance of Internet sources are crucial for students' self-regulated learning on the Internet, yet many struggle to identify reliable information online. While AI-based chatbots have made progress in teaching various skills, their application in improving online evaluation skills remains underexplored. In this study, we present an educational chatbot designed to train university students to evaluate online information. Participants were assigned to one of three conditions: (1) training with the interactive chatbot, (2) training with a static checklist, or (3) no additional training (i.e., baseline condition). In an ecologically valid test that provided a simulated web environment, participants had to identify the most reliable and relevant websites among several non-target websites to solve given problems. Participants in the chatbot condition outperformed those in the baseline condition on this test, while participants in the checklist condition showed no significant advantage over the baseline condition. These findings suggest the potential of educational chatbots as effective tools for improving critical evaluation skills. The implications of using chatbots for scalable educational interventions are discussed, particularly in light of recent advances such as the integration of large language models into search engines and the potential for hybrid intelligence paradigms that combine human oversight with AI-driven learning tools.
Citations: 0
“Eh? Aye!”: Categorisation bias for natural human vs AI-augmented voices is influenced by dialect
Neil W. Kirk
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100153. Pub Date: 2025-04-15. DOI: 10.1016/j.chbah.2025.100153
Abstract: Advances in AI-assisted voice technology have made it easier to clone or disguise voices, creating a wide range of synthetic voices using different accents, dialects, and languages. While these developments offer positive applications, they also pose risks for misuse. This raises the question of whether listeners can reliably distinguish between human and AI-enhanced speech, and whether prior experiences and expectations about language varieties that are traditionally less represented by technology affect this ability. Two experiments were conducted to investigate listeners' ability to categorise voices as human or AI-enhanced in both a standard and a regional Scottish dialect. Using a Signal Detection Theory framework, both experiments explored participants' sensitivity and categorisation biases. In Experiment 1 (N = 100), a predominantly Scottish sample showed above-chance performance in distinguishing between human and AI-enhanced voices, but there was no significant effect of dialect on sensitivity. However, listeners exhibited a bias toward categorising voices as "human", which was concentrated within the regional Dundonian Scots dialect. In Experiment 2 (N = 100), participants from southern and eastern England demonstrated reduced overall sensitivity and a Human Categorisation Bias that was spread more evenly across the two dialects. These findings have implications for the growing use of AI-assisted voice technology in linguistically diverse contexts, highlighting both the potential for enhanced representation of Minority, Indigenous, Non-standard and Dialect (MIND) varieties and the risks of AI misuse.
Citations: 0
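The Signal Detection Theory quantities named in this abstract, sensitivity and categorisation bias, have a standard computation. The sketch below uses made-up trial counts; "human" is treated as the signal response, so a negative criterion corresponds to the kind of Human Categorisation Bias the study reports.

```python
from statistics import NormalDist

def sdt(hits, misses, false_alarms, correct_rejections):
    """Return sensitivity d' and criterion c from trial counts."""
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(h) - z(f)            # distance between signal/noise distributions
    criterion = -(z(h) + z(f)) / 2   # negative c = liberal bias toward "human"
    return d_prime, criterion

# Illustrative counts: 70/100 human voices correctly called "human",
# but 40/100 AI-augmented voices also called "human" (false alarms).
d, c = sdt(hits=70, misses=30, false_alarms=40, correct_rejections=60)
print(f"d' = {d:.2f}, c = {c:.2f}")  # d' ~ 0.77, c ~ -0.13 (bias toward "human")
```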
We see them as we are: How humans react to perceived unfair behavior by artificial intelligence in a social decision-making task
Christopher A. Sanchez, Lena Hildenbrand, Naomi Fitter
Computers in Human Behavior: Artificial Humans, Volume 4, Article 100154. Pub Date: 2025-04-15. DOI: 10.1016/j.chbah.2025.100154
Abstract: The proliferation of artificially intelligent (AI) systems in many everyday contexts has emphasized the need to better understand how humans interact with such systems. Previous research has suggested that individuals in many applied contexts believe that these systems are less biased than human counterparts, and thus more trustworthy decision makers. The current study examined whether this common assumption was actually true when placed in a decision-making task that also contains a strong social component (i.e., the Ultimatum Game). The anthropomorphic appearance of AI opponents was also manipulated to determine whether visual appearance contributes to response behavior. Results indicated that participants treated AI agents identically to humans, and not as non-intelligent (e.g., random number generator-based) systems. This was manifested both in how they responded to offers from the AI system and in how fairly they subsequently treated the AI opponent. The current results suggest that humans treat AI systems very similarly to other humans, and not as privileged decision makers, which has both positive and negative implications for human-autonomy teaming.
Citations: 0
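For readers unfamiliar with the task, the Ultimatum Game's payoff logic is simple to state in code. A minimal sketch follows; the fairness-threshold acceptance rule is an illustrative inequity-aversion heuristic, not the paper's model of participants or of its AI opponents.

```python
# One round of the Ultimatum Game: a proposer offers a split of a pot,
# a responder accepts or rejects, and rejection leaves both with nothing.

POT = 10

def responder_accepts(offer, fairness_threshold=0.3):
    # Human responders typically punish splits they perceive as unfair,
    # rejecting offers below some fraction of the pot (threshold assumed here).
    return offer >= fairness_threshold * POT

def play_round(offer):
    if responder_accepts(offer):
        return POT - offer, offer  # (proposer payoff, responder payoff)
    return 0, 0                    # both earn nothing on rejection

for offer in (1, 3, 5):
    print(f"offer {offer}: payoffs = {play_round(offer)}")
```

Rejecting a positive offer is economically irrational for the responder, which is exactly why rejection rates serve as a measure of how socially (rather than instrumentally) people treat an AI proposer.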