Administration by algorithm: A risk management framework

Inf. Polity Pub Date: 2020-12-04 DOI: 10.3233/ip-200249
F. Bannister, R. Connolly
{"title":"算法管理:风险管理框架","authors":"F. Bannister, R. Connolly","doi":"10.3233/ip-200249","DOIUrl":null,"url":null,"abstract":"Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.","PeriodicalId":418875,"journal":{"name":"Inf. Polity","volume":"C-19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"18","resultStr":"{\"title\":\"Administration by algorithm: A risk management framework\",\"authors\":\"F. Bannister, R. Connolly\",\"doi\":\"10.3233/ip-200249\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.\",\"PeriodicalId\":418875,\"journal\":{\"name\":\"Inf. 
Polity\",\"volume\":\"C-19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-04\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"18\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Inf. Polity\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3233/ip-200249\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Inf. Polity","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/ip-200249","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 18

Abstract

Algorithmic decision-making is neither a recent phenomenon nor one necessarily associated with artificial intelligence (AI), though advances in AI are increasingly resulting in what were heretofore human decisions being taken over by, or becoming dependent on, algorithms and technologies like machine learning. Such developments promise many potential benefits, but are not without certain risks. These risks are not always well understood. It is not just a question of machines making mistakes; it is the embedding of values, biases and prejudices in software which can discriminate against both individuals and groups in society. Such biases are often hard either to detect or prove, particularly where there are problems with transparency and accountability and where such systems are outsourced to the private sector. Consequently, being able to detect and categorise these risks is essential in order to develop a systematic and calibrated response. This paper proposes a simple taxonomy of decision-making algorithms in the public sector and uses this to build a risk management framework with a number of components including an accountability structure and regulatory governance. This framework is designed to assist scholars and practitioners interested in ensuring structured accountability and legal regulation of AI in the public sphere.