Are Algorithms Value-Free?
Gabbrielle M. Johnson
Journal of Moral Philosophy, published 2023-11-22
DOI: 10.1163/17455243-20234372 (https://doi.org/10.1163/17455243-20234372)
Citations: 0
Abstract
As inductive decision-making procedures, the inferences made by machine learning programs are subject to underdetermination by evidence and bear inductive risk. One strategy for overcoming these challenges is guided by a presumption in philosophy of science that inductive inferences can and should be value-free. Applied to machine learning programs, the strategy assumes that the influence of values is restricted to data and decision outcomes, thereby omitting internal value-laden design choice points. In this paper, I apply arguments from feminist philosophy of science to machine learning programs to make the case that the resources required to respond to these inductive challenges render critical aspects of their design constitutively value-laden. I demonstrate these points specifically in the case of recidivism algorithms, arguing that contemporary debates concerning fairness in criminal justice risk-assessment programs are best understood as iterations of traditional arguments from inductive risk and demarcation, and thereby establish the value-laden nature of automated decision-making programs. Finally, in light of these points, I address opportunities for relocating the value-free ideal in machine learning and the limitations that accompany them.
Journal Description
The Journal of Moral Philosophy is a peer-reviewed journal of moral, political and legal philosophy with an international focus. It publishes articles in all areas of normative philosophy, including pure and applied ethics, as well as moral, legal, and political theory. Articles exploring non-Western traditions are also welcome. The Journal seeks to promote lively discussions and debates for established academics and the wider community, by publishing articles that avoid unnecessary jargon without sacrificing academic rigour. It encourages contributions from newer members of the philosophical community. The Journal of Moral Philosophy is published four times a year, in January, April, July and October.