Algorithmic Transparency for the Smart City

R. Brauneis, Ellen P. Goodman
{"title":"智慧城市的算法透明度","authors":"R. Brauneis, Ellen P. Goodman","doi":"10.2139/SSRN.3012499","DOIUrl":null,"url":null,"abstract":"Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser. \nWe set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness. \nTo do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain.","PeriodicalId":171535,"journal":{"name":"LSN: Rights & Liberties (Topic)","volume":"249 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"162","resultStr":"{\"title\":\"Algorithmic Transparency for the Smart City\",\"authors\":\"R. Brauneis, Ellen P. 
Goodman\",\"doi\":\"10.2139/SSRN.3012499\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired for reasons she will never know and cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic both because governmental decisions may be especially weighty, and because democratically-elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser. \\nWe set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically-determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely-used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody, and could evaluate their utility and fairness. \\nTo do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem. But contrary to conventional wisdom, they were not the biggest obstacle. It will not usually be necessary to release the code used to execute predictive models in order to dramatically increase transparency. We conclude that publicly-deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. 
Although it would require a multi-stakeholder process to develop best practices for record generation and disclosure, we present what we believe are eight principal types of information that such records should ideally contain.\",\"PeriodicalId\":171535,\"journal\":{\"name\":\"LSN: Rights & Liberties (Topic)\",\"volume\":\"249 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-08-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"162\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"LSN: Rights & Liberties (Topic)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.2139/SSRN.3012499\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"LSN: Rights & Liberties (Topic)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2139/SSRN.3012499","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 162

Abstract

Emerging across many disciplines are questions about algorithmic ethics – about the values embedded in artificial intelligence and big data analytics that increasingly replace human decisionmaking. Many are concerned that an algorithmic society is too opaque to be accountable for its behavior. An individual can be denied parole or denied credit, fired or not hired, for reasons she will never know and that cannot be articulated. In the public sector, the opacity of algorithmic decisionmaking is particularly problematic, both because governmental decisions may be especially weighty and because democratically elected governments bear special duties of accountability. Investigative journalists have recently exposed the dangerous impenetrability of algorithmic processes used in the criminal justice field – dangerous because the predictions they make can be both erroneous and unfair, with none the wiser.

We set out to test the limits of transparency around governmental deployment of big data analytics, focusing our investigation on local and state government use of predictive algorithms. It is here, in local government, that algorithmically determined decisions can be most directly impactful. And it is here that stretched agencies are most likely to hand over the analytics to private vendors, which may make design and policy choices out of the sight of the client agencies, the public, or both. To see just how impenetrable the resulting “black box” algorithms are, we filed 42 open records requests in 23 states seeking essential information about six predictive algorithm programs. We selected the most widely used and well-reviewed programs, including those developed by for-profit companies, nonprofits, and academic/private sector partnerships. The goal was to see if, using the open records process, we could discover what policy judgments these algorithms embody and could evaluate their utility and fairness.

To do this work, we identified what meaningful “algorithmic transparency” entails. We found that in almost every case, it wasn’t provided. Over-broad assertions of trade secrecy were a problem, but contrary to conventional wisdom, they were not the biggest obstacle. Dramatically increasing transparency will not usually require releasing the code used to execute predictive models. We conclude that publicly deployed algorithms will be sufficiently transparent only if (1) governments generate appropriate records about their objectives for algorithmic processes and subsequent implementation and validation; (2) government contractors reveal to the public agency sufficient information about how they developed the algorithm; and (3) public agencies and courts treat trade secrecy claims as the limited exception to public disclosure that the law requires. Although developing best practices for record generation and disclosure would require a multi-stakeholder process, we present what we believe are eight principal types of information such records should ideally contain.
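To make concrete the kind of system at issue, the following is a minimal, hypothetical Python sketch. It is not any vendor's actual model: the feature names, weights, threshold, and record fields are all invented for illustration. It pairs a simple logistic-regression-style risk score of the sort used in pretrial decisions with the sort of documentation (objectives, inputs, decision rule, validation plan) whose disclosure the paper argues matters more than the scoring code itself.

```python
import math

# Hypothetical illustration only -- not any vendor's actual model.
# Feature names, weights, and the threshold below are invented to show
# the shape of a simple predictive risk score of the kind the paper's
# open-records requests targeted.

WEIGHTS = {
    "prior_arrests": 0.35,       # assumed weight: more arrests -> higher risk
    "failed_appearances": 0.60,  # assumed weight: missed court dates -> higher risk
    "age_at_intake": -0.04,      # assumed weight: older -> lower risk
}
INTERCEPT = -1.20
THRESHOLD = 0.50  # assumed policy choice: scores above this flag "high risk"


def risk_score(features: dict) -> float:
    """Return a logistic score in (0, 1) from a linear combination of inputs."""
    z = INTERCEPT + sum(WEIGHTS[name] * features[name] for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))


# The paper's argument: transparency turns less on releasing code like the
# function above than on records documenting objectives, development, and
# validation. A minimal record of that kind might look like this:
TRANSPARENCY_RECORD = {
    "objective": "prioritize pretrial supervision resources",
    "inputs": sorted(WEIGHTS),
    "decision_rule": f"flag as high risk if score > {THRESHOLD}",
    "validation": "accuracy and error rates on held-out cases, reported by group",
}

if __name__ == "__main__":
    defendant = {"prior_arrests": 2, "failed_appearances": 1, "age_at_intake": 29}
    score = risk_score(defendant)
    print(f"score = {score:.2f}; high risk: {score > THRESHOLD}")
```

Note how little of the policy content lives in the scoring function itself: the choice of inputs, their weights, and the flagging threshold are where the embedded value judgments sit, which is why the paper's proposed records target those design choices rather than the executable code.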