A Government of Laws and Not of Machines

IF 1.6 · CAS Tier 3 (Sociology) · JCR Q1 (LAW)
E. Berman
{"title":"法律政府而非机器政府","authors":"E. Berman","doi":"10.2139/SSRN.3098995","DOIUrl":null,"url":null,"abstract":"The technological tool du jour is known as “machine learning,” a powerful form of data mining that uses mathematical algorithms to construct computer models that provide hidden insights by extracting patterns from enormous historical data sets, often for the purpose of making predictions about the future. Machine learning is all around us — it is used for spam filters, facial recognition, the detection of bank fraud and much more — and it is immensely powerful. It can analyze enormous amounts of information and extract relationships in the data that no human would ever discover. Despite its promise, there are reasons to remain skeptical of using machine learning predictions. Existing critiques of machine learning usually focus on one of two types of concerns — one identifies and aims to address the many potential pitfalls that might result in inaccurate models and the other assesses machine learning’s consistency with norms such as transparency, accountability, and due process. This paper takes a step back from the nuts and bolts questions surrounding the implementation of predictive analytics to consider whether and when it is appropriate to use machine learning to make government decisions in the contexts of national security and law enforcement. It argues that certain characteristics of machine-learning generate tensions with rule-of-law principles and that, as a result, machine-learning predictions can be valuable instruments in some decision-making contexts but constitute a threat to fundamental values in others. The paper concludes that government actors should exploit the benefits of machine learning when they enjoy broad decision-making discretion in making decisions, while eschewing it when government discretion is highly constrained.","PeriodicalId":47323,"journal":{"name":"Boston University Law Review","volume":"98 1","pages":"1277"},"PeriodicalIF":1.6000,"publicationDate":"2018-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2139/SSRN.3098995","citationCount":"22","resultStr":"{\"title\":\"A Government of Laws and Not of Machines\",\"authors\":\"E. Berman\",\"doi\":\"10.2139/SSRN.3098995\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The technological tool du jour is known as “machine learning,” a powerful form of data mining that uses mathematical algorithms to construct computer models that provide hidden insights by extracting patterns from enormous historical data sets, often for the purpose of making predictions about the future. Machine learning is all around us — it is used for spam filters, facial recognition, the detection of bank fraud and much more — and it is immensely powerful. It can analyze enormous amounts of information and extract relationships in the data that no human would ever discover. Despite its promise, there are reasons to remain skeptical of using machine learning predictions. Existing critiques of machine learning usually focus on one of two types of concerns — one identifies and aims to address the many potential pitfalls that might result in inaccurate models and the other assesses machine learning’s consistency with norms such as transparency, accountability, and due process. 
This paper takes a step back from the nuts and bolts questions surrounding the implementation of predictive analytics to consider whether and when it is appropriate to use machine learning to make government decisions in the contexts of national security and law enforcement. It argues that certain characteristics of machine-learning generate tensions with rule-of-law principles and that, as a result, machine-learning predictions can be valuable instruments in some decision-making contexts but constitute a threat to fundamental values in others. The paper concludes that government actors should exploit the benefits of machine learning when they enjoy broad decision-making discretion in making decisions, while eschewing it when government discretion is highly constrained.\",\"PeriodicalId\":47323,\"journal\":{\"name\":\"Boston University Law Review\",\"volume\":\"98 1\",\"pages\":\"1277\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2018-01-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.2139/SSRN.3098995\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Boston University Law Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.2139/SSRN.3098995\",\"RegionNum\":3,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Boston University Law Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.2139/SSRN.3098995","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Citations: 22

Abstract

The technological tool du jour is known as “machine learning,” a powerful form of data mining that uses mathematical algorithms to construct computer models that provide hidden insights by extracting patterns from enormous historical data sets, often for the purpose of making predictions about the future. Machine learning is all around us — it is used for spam filters, facial recognition, the detection of bank fraud and much more — and it is immensely powerful. It can analyze enormous amounts of information and extract relationships in the data that no human would ever discover. Despite its promise, there are reasons to remain skeptical of using machine learning predictions. Existing critiques of machine learning usually focus on one of two types of concerns — one identifies and aims to address the many potential pitfalls that might result in inaccurate models and the other assesses machine learning’s consistency with norms such as transparency, accountability, and due process. This paper takes a step back from the nuts and bolts questions surrounding the implementation of predictive analytics to consider whether and when it is appropriate to use machine learning to make government decisions in the contexts of national security and law enforcement. It argues that certain characteristics of machine-learning generate tensions with rule-of-law principles and that, as a result, machine-learning predictions can be valuable instruments in some decision-making contexts but constitute a threat to fundamental values in others. The paper concludes that government actors should exploit the benefits of machine learning when they enjoy broad decision-making discretion in making decisions, while eschewing it when government discretion is highly constrained.
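To make concrete the kind of "pattern extraction for prediction" the abstract describes (for example, in bank-fraud detection), the following is a minimal, purely illustrative sketch. It is not from the paper; the library (scikit-learn), the features, and the toy data are all assumptions chosen only to show the idea.

```python
# Illustrative only: a toy predictive model fit to historical data,
# in the spirit of the abstract's description of machine learning.
# Assumes scikit-learn is installed; features and labels are hypothetical.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical historical records: [transaction amount, hour of day];
# label = 1 if the past transaction turned out to be fraudulent.
X = [[120.0, 3], [15.5, 14], [990.0, 2], [42.0, 11],
     [875.0, 1], [10.0, 16], [650.0, 23], [30.0, 9]]
y = [1, 0, 1, 0, 1, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = LogisticRegression()        # learns a decision boundary from past cases
model.fit(X_train, y_train)         # "extracts patterns" from the historical data

print(model.predict(X_test))        # predictions about unseen (future-like) cases
print(model.score(X_test, y_test))  # fraction of held-out cases predicted correctly
```

Note that in such a sketch the decision rule is derived entirely from the historical examples rather than from any explicitly stated criterion, which is the feature of machine-learning prediction the abstract goes on to examine.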
Source Journal
CiteScore: 2.30
Self-citation rate: 5.90%
Articles published: 0
Journal Description: The Boston University Law Review provides analysis and commentary on all areas of the law. Published six times a year, the Law Review contains articles contributed by law professors and practicing attorneys from all over the world, along with notes written by student members.