Automated argument adjudication to solve ethical problems in multi-agent environments

S. Bringsjord, Naveen Sundar Govindarajulu, Michael Giancola
{"title":"解决多智能体环境中伦理问题的自动争论裁决","authors":"S. Bringsjord, Naveen Sundar Govindarajulu, Michael Giancola","doi":"10.1515/pjbr-2021-0009","DOIUrl":null,"url":null,"abstract":"Abstract Suppose an artificial agent a adj {a}_{\\text{adj}} , as time unfolds, (i) receives from multiple artificial agents (which may, in turn, themselves have received from yet other such agents…) propositional content, and (ii) must solve an ethical problem on the basis of what it has received. How should a adj {a}_{\\text{adj}} adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents a 1 , a 2 , … , a n {a}_{1},{a}_{2},\\ldots ,{a}_{n} that sense and report their findings to “adjudicator” agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: a adj {a}_{\\text{adj}} may, for instance, receive a report from a 1 {a}_{1} that proposition ϕ \\phi holds, then from a 2 {a}_{2} that ¬ ϕ \\neg \\phi holds, and then from a 3 {a}_{3} that neither ϕ \\phi nor ¬ ϕ \\neg \\phi should be believed, but rather ψ \\psi instead, at some level of likelihood. We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.","PeriodicalId":90037,"journal":{"name":"Paladyn : journal of behavioral robotics","volume":"100 1","pages":"310 - 335"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Automated argument adjudication to solve ethical problems in multi-agent environments\",\"authors\":\"S. Bringsjord, Naveen Sundar Govindarajulu, Michael Giancola\",\"doi\":\"10.1515/pjbr-2021-0009\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract Suppose an artificial agent a adj {a}_{\\\\text{adj}} , as time unfolds, (i) receives from multiple artificial agents (which may, in turn, themselves have received from yet other such agents…) propositional content, and (ii) must solve an ethical problem on the basis of what it has received. How should a adj {a}_{\\\\text{adj}} adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents a 1 , a 2 , … , a n {a}_{1},{a}_{2},\\\\ldots ,{a}_{n} that sense and report their findings to “adjudicator” agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: a adj {a}_{\\\\text{adj}} may, for instance, receive a report from a 1 {a}_{1} that proposition ϕ \\\\phi holds, then from a 2 {a}_{2} that ¬ ϕ \\\\neg \\\\phi holds, and then from a 3 {a}_{3} that neither ϕ \\\\phi nor ¬ ϕ \\\\neg \\\\phi should be believed, but rather ψ \\\\psi instead, at some level of likelihood. 
We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.\",\"PeriodicalId\":90037,\"journal\":{\"name\":\"Paladyn : journal of behavioral robotics\",\"volume\":\"100 1\",\"pages\":\"310 - 335\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Paladyn : journal of behavioral robotics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1515/pjbr-2021-0009\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Paladyn : journal of behavioral robotics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/pjbr-2021-0009","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Suppose an artificial agent $a_{\text{adj}}$, as time unfolds, (i) receives propositional content from multiple artificial agents (which may, in turn, themselves have received content from yet other such agents…), and (ii) must solve an ethical problem on the basis of what it has received. How should $a_{\text{adj}}$ adjudicate what it has received in order to produce such a solution? We consider an environment infused with logicist artificial agents $a_1, a_2, \ldots, a_n$ that sense and report their findings to “adjudicator” agents who must solve ethical problems. (Many if not most of these agents may be robots.) In such an environment, inconsistency is a virtual guarantee: $a_{\text{adj}}$ may, for instance, receive a report from $a_1$ that proposition $\phi$ holds, then from $a_2$ that $\neg\phi$ holds, and then from $a_3$ that neither $\phi$ nor $\neg\phi$ should be believed, but rather $\psi$ instead, at some level of likelihood. We further assume that agents receiving such incompatible reports will nonetheless sometimes simply need, before long, to make decisions on the basis of these reports, in order to try to solve ethical problems. We provide a solution to such a quandary: AI capable of adjudicating competing reports from subsidiary agents through time, and delivering to humans a rational, ethically correct (relative to underlying ethical principles) recommendation based upon such adjudication. To illuminate our solution, we anchor it to a particular scenario.
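
To make the kind of scenario described in the abstract concrete, here is a minimal, hypothetical sketch (it is not the authors' logic-based system, and every class, field, and function name below is an illustrative assumption): timestamped reports about propositions, each tagged with a source agent and a qualitative likelihood level, are collected, and a naive adjudicator keeps, for each proposition, only the strongest report, breaking ties in favour of the most recent one.

```python
# Minimal illustrative sketch, not the paper's implementation: an adjudicator
# agent a_adj resolves competing, possibly contradictory reports (phi, not-phi,
# psi) from subsidiary agents by qualitative likelihood, then by recency.
from dataclasses import dataclass
from enum import IntEnum


class Likelihood(IntEnum):
    """Qualitative likelihood levels, weakest to strongest (assumed scale)."""
    COUNTERBALANCED = 0
    LIKELY = 1
    VERY_LIKELY = 2
    CERTAIN = 3


@dataclass(frozen=True)
class Report:
    source: str              # reporting agent, e.g. "a1"
    proposition: str         # propositional content, e.g. "phi"
    negated: bool            # True if the report asserts the negation
    likelihood: Likelihood   # how strongly the source backs the report
    time: int                # when a_adj received the report


def adjudicate(reports: list[Report]) -> dict[str, Report]:
    """For each proposition, keep the report with the highest likelihood;
    ties are broken in favour of the most recently received report."""
    best: dict[str, Report] = {}
    for r in sorted(reports, key=lambda rep: (rep.likelihood, rep.time)):
        best[r.proposition] = r  # later iterations are stronger or newer
    return best


if __name__ == "__main__":
    # a1 reports phi, a2 reports not-phi, a3 reports psi as very likely.
    reports = [
        Report("a1", "phi", False, Likelihood.LIKELY, time=1),
        Report("a2", "phi", True, Likelihood.LIKELY, time=2),
        Report("a3", "psi", False, Likelihood.VERY_LIKELY, time=3),
    ]
    for prop, winner in adjudicate(reports).items():
        sign = "not " if winner.negated else ""
        print(f"believe {sign}{prop} (from {winner.source}, {winner.likelihood.name})")
```

On this naive policy the adjudicator would end up committing to $\neg\phi$ (the later of the two equally likely reports) and to $\psi$; the paper itself pursues a far more principled, argument-based adjudication, so this sketch only illustrates the shape of the inputs such an adjudicator faces.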