{"title":"What does it mean to be good? The normative and metaethical problem with ‘AI for good’","authors":"Tom Stenson","doi":"10.1007/s43681-024-00501-x","DOIUrl":null,"url":null,"abstract":"<div><p>Using AI for good is an imperative for its development and regulation, but what exactly does it mean? This article contends that ‘AI for good’ is a powerful normative concept and is problematic for the ethics of AI because it oversimplifies complex philosophical questions in defining good and assumes a level of moral knowledge and certainty that may not be justified. ‘AI for good’ expresses a value judgement on what AI should be and its role in society, thereby functioning as a normative concept in AI ethics. As a moral statement, AI for good makes two things implicit: i) <i>we know what a good outcome is</i> and ii) <i>we know the process by which to achieve it</i>. By examining these two claims, this article will articulate the thesis that ‘AI for good’ should be examined as a <i>normative</i> and <i>metaethical</i> problem for AI ethics. Furthermore, it argues that we need to pay more attention to our relationship with normativity and how it guides what we believe the ‘work’ of ethical AI should be.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"5 2","pages":"1561 - 1570"},"PeriodicalIF":0.0000,"publicationDate":"2024-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-024-00501-x","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Using AI for good is an imperative for its development and regulation, but what exactly does it mean? This article contends that 'AI for good' is a powerful normative concept and is problematic for the ethics of AI because it oversimplifies complex philosophical questions in defining good and assumes a level of moral knowledge and certainty that may not be justified. 'AI for good' expresses a value judgement on what AI should be and its role in society, thereby functioning as a normative concept in AI ethics. As a moral statement, 'AI for good' makes two things implicit: i) we know what a good outcome is and ii) we know the process by which to achieve it. By examining these two claims, this article will articulate the thesis that 'AI for good' should be examined as a normative and metaethical problem for AI ethics. Furthermore, it argues that we need to pay more attention to our relationship with normativity and how it guides what we believe the 'work' of ethical AI should be.