A Neo-Republican Critique of AI ethics

Jonne Maas
DOI: 10.1016/j.jrt.2021.100022
Journal: Journal of responsible technology
Published: 2022-04-01 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S2666659621000159
Citations: 3

Abstract

The AI Ethics literature, aimed at the responsible development of AI systems, widely agrees that society is in dire need of effective accountability mechanisms for AI systems. Machine learning (ML) systems in particular give cause for concern due to their opaque and self-learning characteristics. Nevertheless, what such accountability mechanisms should look like remains either largely unspecified (e.g., ‘stakeholder input’) or ineffective (e.g., ‘ethical guidelines’). In this paper, I argue that the difficulty of formulating and developing effective accountability mechanisms lies partly in the predominant focus on Mill's harm principle, rooted in the conception of freedom as non-interference. A strong focus on harm overshadows other moral wrongs, such as potentially problematic power dynamics between those who shape a system and those affected by it. I propose that the neo-republican conception of freedom as non-domination provides a suitable framework to inform responsible ML development. Domination, as understood by neo-republicans, is a moral wrong because it undermines the potential for human flourishing. To mitigate domination, neo-republicans call for accountability mechanisms that minimize arbitrary relations of power. Neo-republicanism should hence inform responsible ML development, as it provides substantive and concrete grounds for determining when accountability mechanisms are effective (i.e., when they are non-dominating).

Source journal: Journal of responsible technology (Information Systems, Artificial Intelligence, Human-Computer Interaction)
CiteScore: 3.60
Self-citation rate: 0.00%
Review time: 168 days