Increasing the Autonomy of Mobile Robots by Imitation in Multi-robot Scenarios

W. Richert, Ulrich Scheller, M. Koch, B. Kleinjohann, Claudius Stern
DOI: 10.1109/ICAS.2009.21
Published in: 2009 Fifth International Conference on Autonomic and Autonomous Systems
Publication date: 2009-04-20
Citations: 2

Abstract

Because imitation has been shown to drastically cut down the exploration space, it has been widely embraced by the robotics community as a way to speed up autonomous learning. This has mostly been done in fixed demonstrator-imitator relationships, where a predefined demonstrator performs the same action over and over until the robot has learned it sufficiently. In practical multi-robot scenarios, however, the imitation process should not interrupt the observed robot, as that would affect its autonomy. The imitating robot therefore often has only a limited number of behaviors of the same type to learn from. While this usually does not provide enough information to learn a generalized version of the observed low-level actions, it can still help the observer optimize its strategy based on the recognized actions. To do so, the imitating robot first has to interpret the observed behavior in terms of its own behavior knowledge. This means that low-level observations have to be segmented into episodes of apparently similar behavior; for each episode, a corresponding skill in the observing robot's own behavior repertoire has to be found; and for the episode sequence, the corresponding state changes in the observing robot's own strategy have to be determined. The authors have shown how the former can be done in a previous paper. In this paper we describe how our approach for \emph{Evolving Societies of Learning Autonomous Systems} (ESLAS) is extended to be capable of multi-robot imitation. We demonstrate how the recognition results can be used in a multi-robot scenario to decentrally align the robots' behaviors, speed up the overall learning process, and thus increase overall autonomy.
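The pipeline the abstract outlines — segmenting a stream of low-level observations into episodes of apparently similar behavior, then mapping each episode onto a skill in the observer's own repertoire — could be sketched as below. This is an illustrative toy, not the authors' ESLAS implementation: the feature vectors (velocity readings), the distance threshold, and the skill prototypes are all hypothetical stand-ins.

```python
import math

def segment_episodes(observations, threshold=0.5):
    """Split a stream of low-level feature vectors into episodes of
    apparently similar behavior: start a new episode whenever the
    current observation deviates from the running mean of the open
    episode by more than `threshold` (Euclidean distance)."""
    episodes, current = [], []
    for obs in observations:
        if current:
            mean = [sum(col) / len(current) for col in zip(*current)]
            if math.dist(obs, mean) > threshold:
                episodes.append(current)
                current = []
        current.append(obs)
    if current:
        episodes.append(current)
    return episodes

def match_skill(episode, skill_prototypes):
    """Map an episode to the closest known skill by comparing the
    episode's mean feature vector against each skill prototype."""
    mean = [sum(col) / len(episode) for col in zip(*episode)]
    return min(skill_prototypes,
               key=lambda name: math.dist(mean, skill_prototypes[name]))

# Hypothetical observed trajectory: (vx, vy) readings of another robot
# that first drives east, then turns and drives north.
trajectory = [(1.0, 0.0)] * 5 + [(0.0, 1.0)] * 5
episodes = segment_episodes(trajectory, threshold=0.5)
skills = {"move_east": (1.0, 0.0), "move_north": (0.0, 1.0)}
labels = [match_skill(ep, skills) for ep in episodes]
```

With this toy input the trajectory splits into two episodes, labeled `move_east` and `move_north`. A real observer would use richer features and a learned segmentation criterion, but the structure — segment first, then match against the robot's own behavior repertoire — follows the decomposition given in the abstract.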