The Impact of a Strategy of Deception About the Identity of an Artificial Intelligence Teammate on Human Designers

Guanglu Zhang, A. Raina, Ethan Brownell, J. Cagan
DOI: 10.1115/detc2022-88535
Published in: Volume 3B: 48th Design Automation Conference (DAC)
Publication date: 2022-08-14

Abstract

Advances in artificial intelligence (AI) offer new opportunities for human-AI collaboration in engineering design. Human trust in AI is a crucial factor in ensuring effective human-AI collaboration, and several approaches to enhance human trust in AI have been suggested in prior studies. However, it remains an open question in engineering design whether a strategy of deception about the identity of an AI teammate can effectively calibrate human trust in AI and improve human-AI joint performance. This research assesses the impact of the strategy of deception on human designers through a human subjects study in which half of the participants are told that they work with an AI teammate (i.e., without deception), and the other half are told that they work with another human participant when they in fact work with an AI teammate (i.e., with deception). The results demonstrate that, for this study, the strategy of deception improves high proficiency human designers' perceived competency of their teammate. However, the strategy of deception does not raise the average number of team collaborations and does not improve the average performance of high proficiency human designers. For low proficiency human designers, the strategy of deception does not change their perception of their teammate's competency and helpfulness, and it further reduces the average number of team collaborations while hurting their average performance at the beginning of the study. The potential reasons behind these results are discussed, with an argument against using the strategy of deception in engineering design.