Telling autonomous systems what to do

P. Werkhoven, L. Kester, Mark Antonius Neerincx
{"title":"告诉自主系统该做什么","authors":"P. Werkhoven, L. Kester, Mark Antonius Neerincx","doi":"10.1145/3232078.3232238","DOIUrl":null,"url":null,"abstract":"Recent progress in Artificial Intelligence, sensing and network technology, robotics, and (cloud) computing has enabled the development of intelligent autonomous machine systems. Telling such autonomous systems \"what to do\" in a responsible way, is a non-trivial task. For intelligent autonomous machines to function in human society and collaborate with humans, we see three challenges ahead affecting meaningful control of autonomous systems. First, autonomous machines are not yet capable of handling failures and unexpected situations. Providing procedures for all possible failures and situations is unfeasible because the state-action space would explode. Machines should therefore become self-aware (self-assessment, self-management) enabling them to handle unexpected situations when they arise. This is a challenge for the computer science community. Second, in order to keep (meaningful) control, humans come into a new role of providing intelligent autonomous machines with objectives or goal functions (including rules, norms, constraints and moral values), specifying the utility of every possible outcome of actions of autonomous machines. Third, in order to be able to collaborate with humans, autonomous systems will require an understanding of (us) humans (i.e., our social, cognitive, affective and physical behaviors) and the ability to engage in partnership interactions (such as explanations of task performances, and the establishment of joint goals and work agreements). These are new challenges for the cognitive ergonomics community.","PeriodicalId":263115,"journal":{"name":"Proceedings of the 36th European Conference on Cognitive Ergonomics","volume":"32 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"19","resultStr":"{\"title\":\"Telling autonomous systems what to do\",\"authors\":\"P. Werkhoven, L. Kester, Mark Antonius Neerincx\",\"doi\":\"10.1145/3232078.3232238\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Recent progress in Artificial Intelligence, sensing and network technology, robotics, and (cloud) computing has enabled the development of intelligent autonomous machine systems. Telling such autonomous systems \\\"what to do\\\" in a responsible way, is a non-trivial task. For intelligent autonomous machines to function in human society and collaborate with humans, we see three challenges ahead affecting meaningful control of autonomous systems. First, autonomous machines are not yet capable of handling failures and unexpected situations. Providing procedures for all possible failures and situations is unfeasible because the state-action space would explode. Machines should therefore become self-aware (self-assessment, self-management) enabling them to handle unexpected situations when they arise. This is a challenge for the computer science community. Second, in order to keep (meaningful) control, humans come into a new role of providing intelligent autonomous machines with objectives or goal functions (including rules, norms, constraints and moral values), specifying the utility of every possible outcome of actions of autonomous machines. 
Third, in order to be able to collaborate with humans, autonomous systems will require an understanding of (us) humans (i.e., our social, cognitive, affective and physical behaviors) and the ability to engage in partnership interactions (such as explanations of task performances, and the establishment of joint goals and work agreements). These are new challenges for the cognitive ergonomics community.\",\"PeriodicalId\":263115,\"journal\":{\"name\":\"Proceedings of the 36th European Conference on Cognitive Ergonomics\",\"volume\":\"32 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-09-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"19\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 36th European Conference on Cognitive Ergonomics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3232078.3232238\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 36th European Conference on Cognitive Ergonomics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3232078.3232238","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 19

Abstract

Recent progress in Artificial Intelligence, sensing and network technology, robotics, and (cloud) computing has enabled the development of intelligent autonomous machine systems. Telling such autonomous systems "what to do" in a responsible way is a non-trivial task. For intelligent autonomous machines to function in human society and collaborate with humans, we see three challenges ahead affecting meaningful control of autonomous systems. First, autonomous machines are not yet capable of handling failures and unexpected situations. Providing procedures for all possible failures and situations is unfeasible because the state-action space would explode. Machines should therefore become self-aware (self-assessment, self-management), enabling them to handle unexpected situations when they arise. This is a challenge for the computer science community. Second, in order to keep (meaningful) control, humans come into a new role of providing intelligent autonomous machines with objectives or goal functions (including rules, norms, constraints and moral values), specifying the utility of every possible outcome of actions of autonomous machines. Third, in order to be able to collaborate with humans, autonomous systems will require an understanding of (us) humans (i.e., our social, cognitive, affective and physical behaviors) and the ability to engage in partnership interactions (such as explanations of task performances, and the establishment of joint goals and work agreements). These are new challenges for the cognitive ergonomics community.
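The second challenge, humans supplying goal functions (rules, norms, constraints and moral values) that assign a utility to every possible outcome of a machine's actions, can be made concrete with a minimal sketch. The outcome attributes, weights, risk threshold and function names below are hypothetical illustrations, not an implementation prescribed by the paper: the human specifies the utility and the hard norm, and the machine only selects among admissible actions.

```python
# Illustrative sketch only (assumptions, not from the paper): a human-supplied
# goal function over predicted outcomes, with a norm encoded as a hard constraint.
from dataclasses import dataclass


@dataclass
class Outcome:
    name: str
    mission_progress: float   # task-utility component, in [0, 1]
    collateral_risk: float    # risk to bystanders, in [0, 1]


def goal_function(outcome: Outcome) -> float:
    """Human-specified utility: reward mission progress, heavily penalize risk."""
    return 1.0 * outcome.mission_progress - 5.0 * outcome.collateral_risk


def violates_norm(outcome: Outcome) -> bool:
    """Hard constraint (a 'norm'): never accept risk above a fixed threshold."""
    return outcome.collateral_risk > 0.2


def choose_action(candidates: dict[str, Outcome]) -> str:
    """The machine picks the admissible action whose predicted outcome scores highest."""
    admissible = {a: o for a, o in candidates.items() if not violates_norm(o)}
    return max(admissible, key=lambda a: goal_function(admissible[a]))


if __name__ == "__main__":
    predicted = {
        "proceed_fast": Outcome("fast route", mission_progress=0.9, collateral_risk=0.3),
        "proceed_slow": Outcome("slow route", mission_progress=0.6, collateral_risk=0.05),
        "abort": Outcome("abort", mission_progress=0.0, collateral_risk=0.0),
    }
    print(choose_action(predicted))  # "proceed_slow": the fast route violates the norm
```

In this toy setup the human never enumerates state-action procedures; they only declare what outcomes are worth and what is forbidden, which is the division of labor the abstract argues for.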