Collaborative human-AI trust (CHAI-T): A process framework for active management of trust in human-AI collaboration

Melanie J. McGrath, Andreas Duenser, Justine Lacey, Cécile Paris
Journal: Computers in Human Behavior: Artificial Humans, Vol. 6, Article 100200
DOI: 10.1016/j.chbah.2025.100200
Published: 2025-08-26 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S2949882125000842
Citations: 0

Abstract

Collaborative human-AI (HAI) teaming combines the unique skills and capabilities of humans and machines in sustained teaming interactions leveraging the strengths of each. In tasks involving regular exposure to novelty and uncertainty, collaboration between adaptive, creative humans and powerful, precise artificial intelligence (AI) promises new solutions and efficiencies. User trust is essential to creating and maintaining these collaborative relationships. Established models of trust in traditional forms of AI typically recognize the contribution of three primary categories of trust antecedents: characteristics of the human user, characteristics of the technology, and environmental factors. The emergence of HAI teams, however, requires an understanding of human trust that accounts for the specificity of task contexts and goals, integrates processes of interaction, and captures how trust evolves in a teaming environment over time. Drawing on both the psychological and computer science literature, the process framework of trust in collaborative HAI teams (CHAI-T) presented in this paper adopts the tripartite structure of antecedents established by earlier models, while incorporating team processes and performance phases to capture the dynamism inherent to trust in teaming contexts. These features enable active management of trust in collaborative AI systems, with practical implications for the design and deployment of collaborative HAI teams.