{"title":"在知识密集型众包竞赛中采用人工智能队友:透明度和可解释性的作用","authors":"Ziheng Wang, Jiachen Wang, Chengyu Tian, Ahsan Ali, Xicheng Yin","doi":"10.1108/k-02-2024-0478","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>As the role of AI on human teams shifts from a tool to a teammate, the implementation of AI teammates into knowledge-intensive crowdsourcing (KI-C) contest teams represents a forward-thinking and feasible solution to improve team performance. Since contest teams are characterized by virtuality, temporality, competitiveness, and skill diversity, the human-AI interaction mechanism underlying conventional teams is no longer applicable. This study empirically analyzes the effects of AI teammate attributes on human team members’ willingness to adopt AI in crowdsourcing contests.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>A questionnaire-based online experiment was designed to perform behavioral data collection. We obtained 206 valid anonymized samples from 28 provinces in China. The Ordinary Least Squares (OLS) model was used to test the proposed hypotheses.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>We find that the transparency and explainability of AI teammates have mediating effects on human team members’ willingness to adopt AI through trust. Due to the different tendencies exhibited by members with regard to three types of cognitive load, nonlinear U-shaped relationships are observed among explainability, cognitive load, and willingness to adopt AI.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>We provide design ideas for human-AI team mechanisms in KI-C scenarios, and rationally explain how the U-shaped relationship between AI explainability and cognitive load emerges.</p><!--/ Abstract__block -->","PeriodicalId":49930,"journal":{"name":"Kybernetes","volume":"22 1","pages":""},"PeriodicalIF":2.5000,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Adopting AI teammates in knowledge-intensive crowdsourcing contests: the roles of transparency and explainability\",\"authors\":\"Ziheng Wang, Jiachen Wang, Chengyu Tian, Ahsan Ali, Xicheng Yin\",\"doi\":\"10.1108/k-02-2024-0478\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3>Purpose</h3>\\n<p>As the role of AI on human teams shifts from a tool to a teammate, the implementation of AI teammates into knowledge-intensive crowdsourcing (KI-C) contest teams represents a forward-thinking and feasible solution to improve team performance. Since contest teams are characterized by virtuality, temporality, competitiveness, and skill diversity, the human-AI interaction mechanism underlying conventional teams is no longer applicable. This study empirically analyzes the effects of AI teammate attributes on human team members’ willingness to adopt AI in crowdsourcing contests.</p><!--/ Abstract__block -->\\n<h3>Design/methodology/approach</h3>\\n<p>A questionnaire-based online experiment was designed to perform behavioral data collection. We obtained 206 valid anonymized samples from 28 provinces in China. The Ordinary Least Squares (OLS) model was used to test the proposed hypotheses.</p><!--/ Abstract__block -->\\n<h3>Findings</h3>\\n<p>We find that the transparency and explainability of AI teammates have mediating effects on human team members’ willingness to adopt AI through trust. 
Due to the different tendencies exhibited by members with regard to three types of cognitive load, nonlinear U-shaped relationships are observed among explainability, cognitive load, and willingness to adopt AI.</p><!--/ Abstract__block -->\\n<h3>Originality/value</h3>\\n<p>We provide design ideas for human-AI team mechanisms in KI-C scenarios, and rationally explain how the U-shaped relationship between AI explainability and cognitive load emerges.</p><!--/ Abstract__block -->\",\"PeriodicalId\":49930,\"journal\":{\"name\":\"Kybernetes\",\"volume\":\"22 1\",\"pages\":\"\"},\"PeriodicalIF\":2.5000,\"publicationDate\":\"2024-06-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Kybernetes\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1108/k-02-2024-0478\",\"RegionNum\":4,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, CYBERNETICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Kybernetes","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/k-02-2024-0478","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, CYBERNETICS","Score":null,"Total":0}
Citations: 0
Adopting AI teammates in knowledge-intensive crowdsourcing contests: the roles of transparency and explainability
Purpose
As the role of AI on human teams shifts from tool to teammate, integrating AI teammates into knowledge-intensive crowdsourcing (KI-C) contest teams is a forward-looking and feasible way to improve team performance. Because contest teams are characterized by virtuality, temporality, competitiveness, and skill diversity, the human-AI interaction mechanisms that underlie conventional teams no longer apply. This study empirically analyzes how the attributes of AI teammates affect human team members' willingness to adopt AI in crowdsourcing contests.
Design/methodology/approach
A questionnaire-based online experiment was designed to collect behavioral data, yielding 206 valid anonymized responses from 28 provinces in China. Ordinary least squares (OLS) regression was used to test the proposed hypotheses.
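As a rough illustration of this kind of estimation (the abstract does not report the exact model specification, so the variable names, controls, and data file below are assumptions, not the authors' actual measures), an OLS model could be fit in Python with statsmodels as follows:

```python
# Hypothetical sketch of an OLS test along the lines described above.
# Variable names (transparency, explainability, trust, adopt_willingness)
# and the data file are illustrative assumptions only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # assumed 206-row questionnaire dataset

# Regress adoption willingness on the hypothesized AI-teammate attributes.
model = smf.ols("adopt_willingness ~ transparency + explainability + trust", data=df).fit()
print(model.summary())
```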
Findings
We find that the transparency and explainability of AI teammates influence human team members' willingness to adopt AI, with trust acting as the mediator. Because members respond differently to the three types of cognitive load, nonlinear U-shaped relationships are observed among explainability, cognitive load, and willingness to adopt AI.
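A U-shaped relationship is commonly tested by adding a squared predictor, and a mediation effect by combining the attribute-to-mediator and mediator-to-outcome paths. The sketch below illustrates both patterns under the same assumed variable names as above; it mirrors common practice rather than the authors' exact analysis.

```python
# Hypothetical sketch: quadratic term for a U-shaped relation and a simple
# two-step (Baron-Kenny style) mediation check via trust. Variable names and
# data are assumptions, not the paper's actual specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_responses.csv")  # assumed questionnaire data

# U-shape: a significant positive coefficient on the squared term is consistent
# with a U-shaped link between explainability and cognitive load.
u_shape = smf.ols("cognitive_load ~ explainability + I(explainability ** 2)", data=df).fit()
print(u_shape.params, u_shape.pvalues)

# Mediation via trust: path a (attribute -> trust) times path b (trust ->
# willingness, controlling for the attribute) approximates the indirect effect.
path_a = smf.ols("trust ~ transparency + explainability", data=df).fit()
path_b = smf.ols("adopt_willingness ~ trust + transparency + explainability", data=df).fit()
indirect = path_a.params["transparency"] * path_b.params["trust"]
print("Indirect effect of transparency via trust:", indirect)
```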
Originality/value
We provide design ideas for human-AI team mechanisms in KI-C scenarios and explain how the U-shaped relationship between AI explainability and cognitive load emerges.
Journal description:
Kybernetes is the official journal of the UNESCO-recognized World Organisation of Systems and Cybernetics (WOSC) and The Cybernetics Society.
The journal is an important forum for the exchange of knowledge and information among all those who are interested in cybernetics and systems thinking.
It is devoted to improvement in the understanding of human, social, organizational, technological and sustainable aspects of society and their interdependencies. It encourages consideration of a range of theories, methodologies and approaches, and their transdisciplinary links. The spirit of the journal comes from Norbert Wiener's understanding of cybernetics as "The Human Use of Human Beings." Hence, Kybernetes strives for examination and analysis, based on a systemic frame of reference, of burning issues of ecosystems, society, organizations, businesses and human behavior.