Extension of the ALVINN Architecture for Robust Visual Guidance of a Miniature Robot

M. Krabbes, H.-J. Bohme, V. Stephan, H. Groß
{"title":"微型机器人鲁棒视觉导引的扩展alvinn结构","authors":"M. Krabbes, H.-J. Bohme, V. Stephan, H. Groß","doi":"10.1109/EURBOT.1997.633545","DOIUrl":null,"url":null,"abstract":"Extensions of the ALVINN architecture are introduced for a KHEPERA miniature robot to navigate visually robust in a labyrinth. The reimplementation of the ALVINN-approach demonstrates, that also in indoor-environments a complex visual robot navigation is achievable using a direct input-output-mapping with a multilayer perceptron network, which is trained by expert-cloning. With the extensions it succeeds to overcome the restrictions of the small visual field of the camera by completing the input vector with history-components, introduction of the velocity dimension and evaluation of the network's output by a dynamic neural field. This creates the prerequisites to take turns which are no longer visible in the actual image and so make use of several alternatives of actions.","PeriodicalId":129683,"journal":{"name":"Proceedings Second EUROMICRO Workshop on Advanced Mobile Robots","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1997-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Extension of the ALVINN-architecture for robust visual guidance of a miniature robot\",\"authors\":\"M. Krabbes, H.-J. Bohme, V. Stephan, H. Groß\",\"doi\":\"10.1109/EURBOT.1997.633545\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Extensions of the ALVINN architecture are introduced for a KHEPERA miniature robot to navigate visually robust in a labyrinth. The reimplementation of the ALVINN-approach demonstrates, that also in indoor-environments a complex visual robot navigation is achievable using a direct input-output-mapping with a multilayer perceptron network, which is trained by expert-cloning. With the extensions it succeeds to overcome the restrictions of the small visual field of the camera by completing the input vector with history-components, introduction of the velocity dimension and evaluation of the network's output by a dynamic neural field. This creates the prerequisites to take turns which are no longer visible in the actual image and so make use of several alternatives of actions.\",\"PeriodicalId\":129683,\"journal\":{\"name\":\"Proceedings Second EUROMICRO Workshop on Advanced Mobile Robots\",\"volume\":\"73 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1997-10-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Second EUROMICRO Workshop on Advanced Mobile Robots\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/EURBOT.1997.633545\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Second EUROMICRO Workshop on Advanced Mobile Robots","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EURBOT.1997.633545","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Extensions of the ALVINN architecture are introduced that enable a KHEPERA miniature robot to navigate robustly by vision in a labyrinth. A reimplementation of the ALVINN approach demonstrates that complex visual robot navigation is achievable in indoor environments as well, using a direct input-output mapping with a multilayer perceptron network trained by expert cloning. The extensions overcome the restriction imposed by the camera's small field of view by augmenting the input vector with history components, introducing a velocity dimension, and evaluating the network's output with a dynamic neural field. This creates the prerequisites for taking turns that are no longer visible in the current image, and thus for choosing among several alternative actions.
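To make the described pipeline concrete, the following is a minimal sketch, not the authors' implementation, of the two ideas the abstract names: an ALVINN-style multilayer perceptron that maps a coarse camera image, augmented with history components and the current velocity, onto a discretized set of steering directions, and an Amari-type dynamic neural field that evaluates the network's output to select a stable action. All sizes, parameter values, and names (IMG_H, N_HISTORY, neural_field_select, etc.) are illustrative assumptions; the trained weights would in practice come from expert cloning.

```python
# Sketch of an ALVINN-style direct input-output mapping with a dynamic
# neural field readout. All dimensions and parameters are assumptions,
# not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# --- assumed input layout ---------------------------------------------------
IMG_H, IMG_W = 15, 16          # coarse "retina" resolution (assumption)
N_HISTORY    = 8               # history components appended to the input (assumption)
N_DIRECTIONS = 21              # discretized steering directions (assumption)
N_INPUT      = IMG_H * IMG_W + N_HISTORY + 1   # image + history + velocity
N_HIDDEN     = 30

# --- a plain two-layer perceptron (weights would be learned by expert cloning)
W1 = rng.normal(scale=0.1, size=(N_HIDDEN, N_INPUT))
b1 = np.zeros(N_HIDDEN)
W2 = rng.normal(scale=0.1, size=(N_DIRECTIONS, N_HIDDEN))
b2 = np.zeros(N_DIRECTIONS)

def mlp_forward(image, history, velocity):
    """Direct input-output mapping: image (+ history, velocity) -> steering activations."""
    x = np.concatenate([image.ravel(), history, [velocity]])
    h = np.tanh(W1 @ x + b1)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))   # activation per steering direction

# --- Amari-type dynamic neural field over the steering dimension -------------
def interaction_kernel(n, sigma=2.0, inhibition=0.3):
    """Local excitation / global inhibition kernel (illustrative parameters)."""
    d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    return np.exp(-d**2 / (2 * sigma**2)) - inhibition

def neural_field_select(stimulus, steps=50, tau=10.0, h_rest=-1.0):
    """Relax a 1-D field driven by the network output; the surviving peak
    is the selected steering direction (keeps the decision stable over time)."""
    K = interaction_kernel(len(stimulus))
    u = np.full_like(stimulus, h_rest)
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))               # field output nonlinearity
        u += (-u + h_rest + stimulus + K @ f) / tau
    return int(np.argmax(u))

# --- usage example ------------------------------------------------------------
image    = rng.random((IMG_H, IMG_W))              # stand-in for a camera frame
history  = np.zeros(N_HISTORY)                     # e.g. recent steering commands
velocity = 0.5

activations = mlp_forward(image, history, velocity)
direction   = neural_field_select(activations)
print("selected steering direction index:", direction)
```

The dynamic neural field step is what allows the robot to commit to a turn whose opening is no longer visible in the current image: the field's self-excitation sustains a peak that was originally driven by earlier frames (and by the history components of the input), instead of letting the decision collapse as soon as the visual evidence disappears.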