The Problem Of Moral Agency In Artificial Intelligence

Riya Manna, Rajakishore Nath
{"title":"The Problem Of Moral Agency In Artificial Intelligence","authors":"Riya Manna, Rajakishore Nath","doi":"10.1109/21CW48944.2021.9532549","DOIUrl":null,"url":null,"abstract":"Humans have invented intelligent machinery to enhance their rational decision-making procedure, which is why it has been named ‘augmented intelligence’. The usage of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming a part of our daily life. We are using this technology not only as a tool to enhance our rationality but also heightening them as the autonomous ethical agent for our future society. Norbert Wiener envisaged ‘Cybernetics’ with a view of a brain-machine interface to augment human beings' biological rationality. Being an autonomous ethical agent presupposes an ‘agency’ in moral decision-making procedure. According to agency's contemporary theories, AI robots might be entitled to some minimal rational agency. However, that minimal agency might not be adequate for a fully autonomous ethical agent's performance in the future. If we plan to implement them as an ethical agent for the future society, it will be difficult for us to judge their actual stand as a moral agent. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot be articulated synthetically until today. We can only anticipate that this milestone will be achieved by AI scientists shortly, which will further help them triumph over ‘the problem of ethical agency in AI’. Philosophers are currently trying a probe of the pre-existing ethical theories to build a guidance framework for the AI robots and construct a tangible overview of artificial moral agency. Although, no unanimous solution is available yet. It will land up in another conflicting situation between biological, moral agency and autonomous ethical agency, which will leave us in a baffled state. Creating rational and ethical AI machines will be a fundamental future research problem for the AI field. This paper aims to investigate ‘the problem of moral agency in AI’ from a philosophical outset and hold a survey of the relevant philosophical discussions to find a resolution for the same.","PeriodicalId":239334,"journal":{"name":"2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW)","volume":"72 5 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Conference on Norbert Wiener in the 21st Century (21CW)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/21CW48944.2021.9532549","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 3

Abstract

Humans have invented intelligent machinery to enhance their rational decision-making, which is why it has been named ‘augmented intelligence’. The use of artificial intelligence (AI) technology is increasing enormously with every passing year, and it is becoming part of our daily life. We use this technology not only as a tool to enhance our rationality but also elevate it to the role of an autonomous ethical agent for our future society. Norbert Wiener envisaged ‘Cybernetics’ with a view toward a brain-machine interface that would augment human beings' biological rationality. Being an autonomous ethical agent presupposes ‘agency’ in the moral decision-making procedure. According to contemporary theories of agency, AI robots might be entitled to some minimal rational agency. However, that minimal agency may not be adequate for the performance of a fully autonomous ethical agent in the future. If we plan to deploy such systems as ethical agents for the future society, it will be difficult for us to judge their actual standing as moral agents. It is well known that any kind of moral agency presupposes consciousness and mental representations, which cannot yet be realized synthetically. We can only anticipate that AI scientists will achieve this milestone before long, which will further help them overcome ‘the problem of ethical agency in AI’. Philosophers are currently probing pre-existing ethical theories to build a guidance framework for AI robots and to construct a tangible overview of artificial moral agency, although no unanimous solution is available yet. This may lead to another conflict between biological moral agency and autonomous ethical agency, leaving us in a baffled state. Creating rational and ethical AI machines will be a fundamental future research problem for the AI field. This paper aims to investigate ‘the problem of moral agency in AI’ from a philosophical standpoint and to survey the relevant philosophical discussions in order to find a resolution.