Can’t Bottom-up Artificial Moral Agents Make Moral Judgements?

Robert James M. Boyles
Filosofija. Sociologija (Journal Article)
DOI: https://doi.org/10.6001/fil-soc.2024.35.1.3
Published: 2024-02-23
Citations: 0

Abstract

This article examines whether bottom-up artificial moral agents are capable of making genuine moral judgements, specifically in light of David Hume’s is-ought problem. The latter underscores the notion that evaluative assertions can never be derived from purely factual propositions. Bottom-up technologies, by contrast, are those designed via evolutionary, developmental, or learning techniques. This paper looks into the nature of these systems with the aim of preliminarily assessing whether there are good reasons to suspect that, at the foundational level, their moral reasoning capabilities are vulnerable to the no-ought-from-is thesis. The main hypothesis of the present work is that conceptually analysing the notion of bottom-up artificial moral agents reveals that their seeming moral judgements lack a proper philosophical basis. For one, such artifacts arrive at an understanding of ethically relevant ideas by culling data or facts from the environment. Thus, in relation to the is-ought problem, it may be argued that, even if bottom-up systems seem prima facie capable of generating apparent moral judgements, these actually lack good moral grounding, if they are not empty of any ethical value.