Smart Technologies and Our Sense of Self: Going Beyond Epistemic Counter-Profiling

S. Delacroix, Michael Veale
DOI: 10.4337/9781788972000.00011
Journal: Sustainable Technology eJournal
Published: 2019-04-16 (Journal Article)
Citations: 6

Abstract

This chapter focuses on the extent to which sophisticated profiling techniques may end up undermining, rather than enhancing, our capacity for ethical agency. This capacity demands both opacity respect—preserving a gap between the self we present and the self we conceal—and an ability to call into question practices that are ethically wanting. Pushed to its limit, the smooth optimisation of our environment may prevent us from experiencing many of the tensions that otherwise prompt us to reconsider accepted practices. An optimally personalised world may not ever call for any ‘action’ as Hannah Arendt describes it.

Can systems be designed to personalise responsibly? Greater time and research needs to be invested in designing a range of viable ‘perspective widening’ tools, as many such tools either burden users with little guarantee of meaningful engagement, or underestimate the extent to which individuals’ preferences are themselves malleable. Any approach that tries to predict what users might like, or what might change their views, risks the same pitfalls as any other form of personalisation. Instead, we argue that the most promising avenue is to push for diverse uses of newly developed systems, and measure those systems’ success at least partly on that basis. Inviting appropriation and repurposing would help keep users engaged in systems of data collection and profiling. This will not be a straightforward task: sometimes it will be in tension with traditional measures of success and performance. Yet the increasing integration of algorithmic systems in society requires us to widen our understanding of agency beyond a narrow, decontextualised focus on passive consumption preferences.