Trusting the Moral Judgments of a Robot: Perceived Moral Competence and Humanlikeness of a GPT-3 Enabled AI

Ali Momen, E. D. Visser, Kyle Wolsten, Katrina Cooley, James C. Walliser, Chad C. Tossell

Proceedings of the Annual Hawaii International Conference on System Sciences, 2023, pp. 501-510. Published 2023-01-03. DOI: 10.21428/cb6ab371.755e9cb7
Citations: 2
Abstract
Advancements in computing power and foundation modeling have enabled artificial intelligence (AI) to respond to moral queries with surprising accuracy. This raises the question of whether we would trust AI to influence human moral decision-making, an activity that has so far been uniquely human. We explored how a machine agent trained to respond to moral queries (Delphi; Jiang et al., 2021) is perceived by human questioners. Participants were tasked with querying the agent, presented either as a humanlike robot or as a web client, to determine whether it was morally competent and could be trusted. Participants rated the moral competence and perceived morality of both agents as high, yet found them lacking because the agent could not provide justifications for its moral judgments. Although both agents were also rated highly on trustworthiness, participants expressed little intention to rely on such an agent in the future. This work presents an important first evaluation of a morally competent algorithm integrated with a humanlike platform, one that could advance the development of moral robot advisors.