{"title":"An Institutionalist Approach to AI Ethics: Justifying the Priority of Government Regulation over Self-Regulation","authors":"Thomas Ferretti","doi":"10.1515/mopp-2020-0056","DOIUrl":null,"url":null,"abstract":"Abstract This article explores the cooperation of government and the private sector to tackle the ethical dimension of artificial intelligence (AI). The argument draws on the institutionalist approach in philosophy and business ethics defending a ‘division of moral labor’ between governments and the private sector (Rawls 2001; Scheffler and Munoz-Dardé 2005). The goal and main contribution of this article is to explain how this approach can provide ethical guidelines to the AI industry and to highlight the limits of self-regulation. In what follows, I discuss three institutionalist claims. First, principles of AI ethics should be validated through legitimate democratic processes. Second, compliance with these principles should be secured in a stable way. Third, their implementation in practice should be as efficient as possible. If we accept these claims, there are good reasons to conclude that, in many cases, governments implementing hard regulation are in principle (if not yet in practice) the best instruments to secure an ethical development of AI systems. Where adequate regulation exists, firms should respect the law. But when regulation does not yet exist, helping governments build adequate regulation should be businesses’ ethical priority, not self-regulation.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-08-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/mopp-2020-0056","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 8
Abstract
This article explores cooperation between government and the private sector in tackling the ethical dimension of artificial intelligence (AI). The argument draws on the institutionalist approach in philosophy and business ethics, which defends a ‘division of moral labor’ between governments and the private sector (Rawls 2001; Scheffler and Munoz-Dardé 2005). The goal and main contribution of this article is to explain how this approach can provide ethical guidelines to the AI industry and to highlight the limits of self-regulation. In what follows, I discuss three institutionalist claims. First, principles of AI ethics should be validated through legitimate democratic processes. Second, compliance with these principles should be secured in a stable way. Third, their implementation in practice should be as efficient as possible. If we accept these claims, there are good reasons to conclude that, in many cases, governments implementing hard regulation are in principle (if not yet in practice) the best instruments for securing the ethical development of AI systems. Where adequate regulation exists, firms should respect the law. But where regulation does not yet exist, businesses’ ethical priority should be helping governments build adequate regulation, not self-regulation.