The transparency dilemma: How AI disclosure erodes trust

IF 3.4 · CAS Tier 2 (Management) · JCR Q2 (MANAGEMENT)
Oliver Schilke, Martin Reimann
{"title":"透明度困境:人工智能信息披露如何侵蚀信任","authors":"Oliver Schilke ,&nbsp;Martin Reimann","doi":"10.1016/j.obhdp.2025.104405","DOIUrl":null,"url":null,"abstract":"<div><div>As generative artificial intelligence (AI) has found its way into various work tasks, questions about whether its usage should be disclosed and the consequences of such disclosure have taken center stage in public and academic discourse on digital transparency. This article addresses this debate by asking: Does disclosing the usage of AI compromise trust in the user? We examine the impact of AI disclosure on trust across diverse tasks—from communications via analytics to artistry—and across individual actors such as supervisors, subordinates, professors, analysts, and creatives, as well as across organizational actors such as investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, we argue that this reduction in trust can be explained by reduced perceptions of legitimacy, as shown across various experimental designs (Studies 6–8). Moreover, we demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory, though it is comparatively weaker than the effect of third-party exposure (Studies 9–13). A within-paper meta-analysis suggests this trust penalty is attenuated but not eliminated among evaluators with favorable technology attitudes and perceptions of high AI accuracy. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions, emphasizing that transparency is not straightforwardly beneficial, and highlighting legitimacy’s central role in trust formation.</div></div>","PeriodicalId":48442,"journal":{"name":"Organizational Behavior and Human Decision Processes","volume":"188 ","pages":"Article 104405"},"PeriodicalIF":3.4000,"publicationDate":"2025-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The transparency dilemma: How AI disclosure erodes trust\",\"authors\":\"Oliver Schilke ,&nbsp;Martin Reimann\",\"doi\":\"10.1016/j.obhdp.2025.104405\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>As generative artificial intelligence (AI) has found its way into various work tasks, questions about whether its usage should be disclosed and the consequences of such disclosure have taken center stage in public and academic discourse on digital transparency. This article addresses this debate by asking: Does disclosing the usage of AI compromise trust in the user? We examine the impact of AI disclosure on trust across diverse tasks—from communications via analytics to artistry—and across individual actors such as supervisors, subordinates, professors, analysts, and creatives, as well as across organizational actors such as investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, we argue that this reduction in trust can be explained by reduced perceptions of legitimacy, as shown across various experimental designs (Studies 6–8). 
Moreover, we demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory, though it is comparatively weaker than the effect of third-party exposure (Studies 9–13). A within-paper meta-analysis suggests this trust penalty is attenuated but not eliminated among evaluators with favorable technology attitudes and perceptions of high AI accuracy. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions, emphasizing that transparency is not straightforwardly beneficial, and highlighting legitimacy’s central role in trust formation.</div></div>\",\"PeriodicalId\":48442,\"journal\":{\"name\":\"Organizational Behavior and Human Decision Processes\",\"volume\":\"188 \",\"pages\":\"Article 104405\"},\"PeriodicalIF\":3.4000,\"publicationDate\":\"2025-04-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Organizational Behavior and Human Decision Processes\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0749597825000172\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"MANAGEMENT\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Organizational Behavior and Human Decision Processes","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0749597825000172","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MANAGEMENT","Score":null,"Total":0}
Citations: 0

Abstract

As generative artificial intelligence (AI) has found its way into various work tasks, questions about whether its usage should be disclosed and the consequences of such disclosure have taken center stage in public and academic discourse on digital transparency. This article addresses this debate by asking: Does disclosing the usage of AI compromise trust in the user? We examine the impact of AI disclosure on trust across diverse tasks—from communications via analytics to artistry—and across individual actors such as supervisors, subordinates, professors, analysts, and creatives, as well as across organizational actors such as investment funds. Thirteen experiments consistently demonstrate that actors who disclose their AI usage are trusted less than those who do not. Drawing on micro-institutional theory, we argue that this reduction in trust can be explained by reduced perceptions of legitimacy, as shown across various experimental designs (Studies 6–8). Moreover, we demonstrate that this negative effect holds across different disclosure framings, above and beyond algorithm aversion, regardless of whether AI involvement is known, and regardless of whether disclosure is voluntary or mandatory, though it is comparatively weaker than the effect of third-party exposure (Studies 9–13). A within-paper meta-analysis suggests this trust penalty is attenuated but not eliminated among evaluators with favorable technology attitudes and perceptions of high AI accuracy. This article contributes to research on trust, AI, transparency, and legitimacy by showing that AI disclosure can harm social perceptions, emphasizing that transparency is not straightforwardly beneficial, and highlighting legitimacy’s central role in trust formation.
Source journal: Organizational Behavior and Human Decision Processes
CiteScore: 8.90
Self-citation rate: 4.30%
Articles published: 68

Journal description: Organizational Behavior and Human Decision Processes publishes fundamental research in organizational behavior, organizational psychology, and human cognition, judgment, and decision-making. The journal features articles that present original empirical research, theory development, meta-analysis, and methodological advancements relevant to the substantive domains served by the journal. Topics covered by the journal include perception, cognition, judgment, attitudes, emotion, well-being, motivation, choice, and performance. We are interested in articles that investigate these topics as they pertain to individuals, dyads, groups, and other social collectives. For each topic, we place a premium on articles that make fundamental and substantial contributions to understanding psychological processes relevant to human attitudes, cognitions, and behavior in organizations. In order to be considered for publication in OBHDP, a manuscript has to include the following:
1. Demonstrate an interesting behavioral/psychological phenomenon
2. Make a significant theoretical and empirical contribution to the existing literature
3. Identify and test the underlying psychological mechanism for the newly discovered behavioral/psychological phenomenon
4. Have practical implications in organizational context