Corrupted by Algorithms? How AI-Generated and Human-Written Advice Shape (DIS)Honesty

IF 3.8 · CAS Zone 2 (Economics) · JCR Q1 ECONOMICS
Margarita Leib, Nils Köbis, Rainer Michael Rilke, Marloes Hagens, Bernd Irlenbusch
DOI: 10.1093/ej/uead056 · Economic Journal · Published 2023-09-11 (Journal Article)
Citations: 0

Abstract

Artificial Intelligence (AI) increasingly becomes an indispensable advisor. New ethical concerns arise if AI persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a Natural-Language-Processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This is the case for both AI- and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.
Source Journal
Economic Journal
CiteScore: 6.60
Self-citation rate: 3.10%
Articles per year: 82
Journal description: The Economic Journal is the Royal Economic Society's flagship title and one of the founding journals of modern economics. Over the past 125 years the journal has provided a platform for high-quality and imaginative economic research, earning a worldwide reputation for excellence as a general journal that publishes papers in all fields of economics for a broad international readership. It is invaluable to anyone with an active interest in economic issues and is a key source for professional economists in higher education, business, government and the financial sector who want to keep abreast of current thinking in economics.