Tackling the Grand Challenge of Algorithmic Opacity Through Principled Robust Action

Alexander Buhmann, Christian Fieseler
DOI: 10.5771/2747-5174-2021-1-74
Journal: Morals & Machines
Citations: 2

Abstract

Organizations increasingly delegate agency to artificial intelligence. However, such systems can yield unintended negative effects, as they may produce biases against users or reinforce social injustices. What marks them as a unique grand challenge, however, is not their potentially problematic outcomes but their fluid design. Machine learning algorithms are continuously evolving; as a result, their functioning frequently remains opaque to humans. In this article, we apply recent work on tackling grand challenges through robust action to assess the potential and obstacles of managing the challenge of algorithmic opacity. We stress that although this approach is fruitful, it can be gainfully complemented by a discussion regarding the accountability and legitimacy of solutions. In our discussion, we extend the robust action approach by linking it to a set of principles that can serve to evaluate organisational approaches to tackling grand challenges with respect to their ability to foster accountable outcomes under the intricate conditions of algorithmic opacity.