Why Reliabilism Is not Enough: Epistemic and Moral Justification in Machine Learning
A. Smart, Larry James, B. Hutchinson, Simone Wu, Shannon Vallor
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 2020-02-04
DOI: 10.1145/3375627.3375866
Abstract
In this paper we argue that standard calls for explainability that focus on the epistemic inscrutability of black-box machine learning models may be misplaced. If we presume, for the sake of this paper, that machine learning can be a source of knowledge, then it makes sense to ask what kind of justification it involves. How do we reconcile, on the one hand, the seeming justificatory black box with, on the other, the observed wide adoption of machine learning? We argue that, in general, people implicitly adopt reliabilism regarding machine learning. Reliabilism is an epistemological theory of epistemic justification according to which a belief is warranted if it has been produced by a reliable process or method (Goldman, 2012). We argue that, in cases where model deployments require moral justification, reliabilism is not sufficient, and that justifying deployment instead requires establishing robust human processes as a moral "wrapper" around machine outputs. We then suggest that, in certain high-stakes domains with moral consequences, reliabilism does not provide another kind of justification that is necessary: moral justification. Finally, we offer cautions relevant to the (implicit or explicit) adoption of the reliabilist interpretation of machine learning.