Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures

Daniel Susser
{"title":"Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures","authors":"Daniel Susser","doi":"10.1145/3306618.3314286","DOIUrl":null,"url":null,"abstract":"For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions--what behavioral economists call our choice architectures--are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, artificial intelligence and machine learning (AI/ML) makes it possible for those options and their framings--the choice architectures--to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies we pay them little notice. They are, as philosophers of technology put it, transparent to us--effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.","PeriodicalId":418125,"journal":{"name":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"35","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3306618.3314286","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 35

Abstract

For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions--what behavioral economists call our choice architectures--are increasingly technologically laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the sets of options we choose from and the way those options are framed. Moreover, AI/ML makes it possible for those options and their framings--the choice architectures--to be tailored to the individual chooser. They are constructed based on information collected about our individual preferences, interests, aspirations, and vulnerabilities, with the goal of influencing our decisions. At the same time, because we are habituated to these technologies, we pay them little notice. They are, as philosophers of technology put it, transparent to us--effectively invisible. I argue that this invisible layer of technological mediation, which structures and influences our decision-making, renders us deeply susceptible to manipulation. Absent a guarantee that these technologies are not being used to manipulate and exploit, individuals will have little reason to trust them.
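To make the mechanism the abstract describes concrete, here is a minimal, hypothetical sketch of an adaptive choice architecture: a profile inferred from collected user data drives both which options are shown and how they are framed. Everything here--the names, the trait model, the scoring--is an illustrative assumption, not code from the paper.

```python
# Hypothetical sketch (not from the paper) of an adaptive choice architecture.
# A profile inferred from collected data controls two levers the abstract
# identifies: (1) which options appear, and (2) how those options are framed.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    tags: set  # attributes a personalization model could match against

@dataclass
class UserProfile:
    # Inferred preferences/vulnerabilities, e.g. {"impulsive": 0.8}
    traits: dict = field(default_factory=dict)

def relevance(option: Option, profile: UserProfile) -> float:
    """Score how strongly an option matches the inferred profile."""
    return sum(profile.traits.get(tag, 0.0) for tag in option.tags)

def adaptive_choice_architecture(options, profile, k=3):
    """Return a personalized option set with a personalized framing."""
    # Lever 1: select and order the option set by inferred relevance.
    ranked = sorted(options, key=lambda o: relevance(o, profile), reverse=True)[:k]
    # Lever 2: frame the options to fit the profile's strongest trait.
    dominant = max(profile.traits, key=profile.traits.get) if profile.traits else None
    framings = {
        "impulsive": "Only a few left -- act now!",
        "budget": "Best value for your money.",
    }
    frame = framings.get(dominant, "Recommended for you.")
    return [(opt.name, frame) for opt in ranked]

if __name__ == "__main__":
    catalog = [
        Option("premium_plan", {"impulsive", "status"}),
        Option("basic_plan", {"budget"}),
        Option("trial_plan", {"budget", "cautious"}),
    ]
    profile = UserProfile(traits={"impulsive": 0.8, "budget": 0.2})
    for name, frame in adaptive_choice_architecture(catalog, profile):
        print(f"{name}: {frame}")
```

Note that nothing in the returned interface discloses that selection and framing were personalized; that absence of any visible trace is what makes the mediation "transparent" in the paper's sense.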