{"title":"法律政府而非机器政府","authors":"E. Berman","doi":"10.2139/SSRN.3098995","DOIUrl":null,"url":null,"abstract":"The technological tool du jour is known as “machine learning,” a powerful form of data mining that uses mathematical algorithms to construct computer models that provide hidden insights by extracting patterns from enormous historical data sets, often for the purpose of making predictions about the future. Machine learning is all around us — it is used for spam filters, facial recognition, the detection of bank fraud and much more — and it is immensely powerful. It can analyze enormous amounts of information and extract relationships in the data that no human would ever discover. Despite its promise, there are reasons to remain skeptical of using machine learning predictions. Existing critiques of machine learning usually focus on one of two types of concerns — one identifies and aims to address the many potential pitfalls that might result in inaccurate models and the other assesses machine learning’s consistency with norms such as transparency, accountability, and due process. This paper takes a step back from the nuts and bolts questions surrounding the implementation of predictive analytics to consider whether and when it is appropriate to use machine learning to make government decisions in the contexts of national security and law enforcement. It argues that certain characteristics of machine-learning generate tensions with rule-of-law principles and that, as a result, machine-learning predictions can be valuable instruments in some decision-making contexts but constitute a threat to fundamental values in others. The paper concludes that government actors should exploit the benefits of machine learning when they enjoy broad decision-making discretion in making decisions, while eschewing it when government discretion is highly constrained.","PeriodicalId":47323,"journal":{"name":"Boston University Law Review","volume":"98 1","pages":"1277"},"PeriodicalIF":1.6000,"publicationDate":"2018-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.2139/SSRN.3098995","citationCount":"22","resultStr":"{\"title\":\"A Government of Laws and Not of Machines\",\"authors\":\"E. Berman\",\"doi\":\"10.2139/SSRN.3098995\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The technological tool du jour is known as “machine learning,” a powerful form of data mining that uses mathematical algorithms to construct computer models that provide hidden insights by extracting patterns from enormous historical data sets, often for the purpose of making predictions about the future. Machine learning is all around us — it is used for spam filters, facial recognition, the detection of bank fraud and much more — and it is immensely powerful. It can analyze enormous amounts of information and extract relationships in the data that no human would ever discover. Despite its promise, there are reasons to remain skeptical of using machine learning predictions. Existing critiques of machine learning usually focus on one of two types of concerns — one identifies and aims to address the many potential pitfalls that might result in inaccurate models and the other assesses machine learning’s consistency with norms such as transparency, accountability, and due process. 
This paper takes a step back from the nuts and bolts questions surrounding the implementation of predictive analytics to consider whether and when it is appropriate to use machine learning to make government decisions in the contexts of national security and law enforcement. It argues that certain characteristics of machine-learning generate tensions with rule-of-law principles and that, as a result, machine-learning predictions can be valuable instruments in some decision-making contexts but constitute a threat to fundamental values in others. The paper concludes that government actors should exploit the benefits of machine learning when they enjoy broad decision-making discretion in making decisions, while eschewing it when government discretion is highly constrained.\",\"PeriodicalId\":47323,\"journal\":{\"name\":\"Boston University Law Review\",\"volume\":\"98 1\",\"pages\":\"1277\"},\"PeriodicalIF\":1.6000,\"publicationDate\":\"2018-01-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.2139/SSRN.3098995\",\"citationCount\":\"22\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Boston University Law Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.2139/SSRN.3098995\",\"RegionNum\":3,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"LAW\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Boston University Law Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.2139/SSRN.3098995","RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"LAW","Score":null,"Total":0}
Abstract:
The technological tool du jour is “machine learning,” a powerful form of data mining that uses mathematical algorithms to build computer models that surface hidden insights by extracting patterns from enormous historical data sets, often in order to make predictions about the future. Machine learning is all around us, powering spam filters, facial recognition, bank-fraud detection, and much more, and it is immensely powerful: it can analyze enormous amounts of information and extract relationships in the data that no human would ever discover. Despite its promise, there are reasons to remain skeptical of relying on machine-learning predictions. Existing critiques of machine learning usually focus on one of two types of concerns: the first identifies, and aims to address, the many potential pitfalls that can produce inaccurate models, while the second assesses machine learning’s consistency with norms such as transparency, accountability, and due process. This paper steps back from the nuts-and-bolts questions surrounding the implementation of predictive analytics to consider whether, and when, it is appropriate to use machine learning to make government decisions in the contexts of national security and law enforcement. It argues that certain characteristics of machine learning generate tensions with rule-of-law principles and that, as a result, machine-learning predictions can be valuable instruments in some decision-making contexts while constituting a threat to fundamental values in others. The paper concludes that government actors should exploit the benefits of machine learning when they enjoy broad discretion in making decisions, and eschew it when that discretion is highly constrained.
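To make the abstract’s definition concrete, the sketch below (not part of the article; the toy data, labels, and scikit-learn pipeline are illustrative assumptions) shows the kind of prediction the abstract describes: a model extracts patterns from labeled historical examples, here a minimal spam filter, and applies them to new cases.

```python
# A minimal illustrative sketch of "machine learning" as described in the
# abstract: a model learns patterns from labeled historical data and uses
# them to predict labels for unseen inputs. All data here are made up.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Historical examples: message text paired with a known label.
messages = [
    "Win a free prize now, click here",
    "Lowest price on cheap loans, act fast",
    "Meeting moved to 3pm, see agenda attached",
    "Can you review the draft brief before Friday?",
]
labels = ["spam", "spam", "ham", "ham"]

# The pipeline converts text into word counts, then fits a Naive Bayes
# classifier that learns which word patterns correlate with each label.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

# Prediction on unseen messages: the model applies the learned patterns.
print(model.predict(["Claim your free loan prize today"]))  # likely 'spam'
print(model.predict(["Please send the revised agenda"]))    # likely 'ham'
```

The same train-on-history, predict-the-future structure underlies the government uses the article examines; only the stakes and the data differ.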
Journal Introduction:
The Boston University Law Review provides analysis and commentary on all areas of the law. Published six times a year, the Law Review contains articles contributed by law professors and practicing attorneys from all over the world, along with notes written by student members.