Integrating object and grasp recognition for dynamic scene interpretation

S. Ekvall, D. Kragic
DOI: 10.1109/ICAR.2005.1507432
Published in: ICAR '05. Proceedings., 12th International Conference on Advanced Robotics, 2005 (2005-07-18)
Citations: 26

Abstract

Understanding and interpreting dynamic scenes and activities is a very challenging problem. In this paper, we present a system capable of learning robot tasks from demonstration. Classical robot task programming requires an experienced programmer and a lot of tedious work. In contrast, programming by demonstration is a flexible framework that reduces the complexity of programming robot tasks, and allows end-users to demonstrate the tasks instead of writing code. We present our recent steps towards this goal. A system for learning pick-and-place tasks by manually demonstrating them is presented. Each demonstrated task is described by an abstract model involving a set of simple tasks such as what object is moved, where it is moved, and which grasp type was used to move it.
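As a rough illustration only (this is not code from the paper, and all names here are hypothetical), the abstract task model described above could be sketched as a simple record capturing the three components the abstract lists: which object is moved, where it is moved, and which grasp type was used.

```python
from dataclasses import dataclass

# Hypothetical sketch of an abstract pick-and-place task model, per the
# components named in the abstract. Field names and grasp labels are
# illustrative assumptions, not the paper's actual representation.
@dataclass
class PickAndPlaceTask:
    object_name: str      # what object is moved
    target_location: str  # where it is moved
    grasp_type: str       # which grasp type was used to move it

# One demonstrated task recorded as an instance of the model
demo = PickAndPlaceTask(object_name="cup",
                        target_location="shelf",
                        grasp_type="power")
print(demo.grasp_type)
```

A sequence of such records, extracted from observation of a human demonstrator, would then constitute the learned description of the overall task.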