Mobile Manipulation Based on Generic Object Knowledge

F. Bley, Volker Schmirgel, K. Kraiss
{"title":"Mobile Manipulation Based on Generic Object Knowledge","authors":"F. Bley, Volker Schmirgel, K. Kraiss","doi":"10.1109/ROMAN.2006.314363","DOIUrl":null,"url":null,"abstract":"The control of vision-based mobile manipulators relies on specific information representing the object which has to be manipulated. Conventional methods like position-based or image-based visual servoing use a precise 3D model of the object or image features to move the end effector to the desired position. This a-priori knowledge has to be generated by an expert before the manipulation can be performed and it is limited to a specific object. Similar items of the same category varying in size or color can not be handled with these approaches. We are proposing a new methodology, which uses a generic category description instead of specific object features in order to allow the interaction with a broader range of items. This description includes appearance properties represented through geometrical primitives as well as additional category knowledge like functional properties, mechanical attributes and directions for object handling. Since the online determination of possible grasps for multi-fingered grippers is time-consuming, suitable grasp modes are stored for every object category, including preferred areas of contact on the objects surface","PeriodicalId":254129,"journal":{"name":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","volume":"496 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2006-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"23","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"ROMAN 2006 - The 15th IEEE International Symposium on Robot and Human Interactive Communication","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ROMAN.2006.314363","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 23

Abstract

The control of vision-based mobile manipulators relies on specific information representing the object to be manipulated. Conventional methods such as position-based or image-based visual servoing use a precise 3D model of the object, or image features, to move the end effector to the desired position. This a priori knowledge has to be generated by an expert before the manipulation can be performed, and it is limited to a specific object. Similar items of the same category that vary in size or color cannot be handled with these approaches. We propose a new methodology that uses a generic category description instead of specific object features in order to allow interaction with a broader range of items. This description includes appearance properties represented through geometrical primitives, as well as additional category knowledge such as functional properties, mechanical attributes, and directions for object handling. Since the online determination of possible grasps for multi-fingered grippers is time-consuming, suitable grasp modes are stored for every object category, including preferred areas of contact on the object's surface.
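To make the idea concrete, the following is a minimal sketch (not taken from the paper) of how such a generic category description could be organized as a data structure: geometrical primitives for appearance, functional and mechanical category knowledge, and precomputed grasp modes with preferred contact regions. All class names, fields, and example values are hypothetical and chosen only for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Dict, List, Tuple


class Primitive(Enum):
    """Geometrical primitives used to approximate an object's appearance."""
    CYLINDER = "cylinder"
    BOX = "box"
    SPHERE = "sphere"


@dataclass
class GraspMode:
    """A grasp for a multi-fingered gripper, stored offline per category
    so that no time-consuming online grasp planning is needed."""
    name: str                                        # e.g. "side grasp on the body"
    contact_region: str                              # preferred contact area on the surface
    approach_direction: Tuple[float, float, float]   # unit vector in the object frame
    preshape: List[float]                            # finger joint angles before closing


@dataclass
class ObjectCategory:
    """Generic category description replacing an object-specific 3D model."""
    name: str
    primitives: List[Primitive]          # appearance as geometrical primitives
    size_range_m: Tuple[float, float]    # admissible size variation within the category
    functional_properties: List[str]     # e.g. "container", "graspable by handle"
    mechanical_attributes: Dict[str, object]  # e.g. {"max_force_N": 10, "deformable": False}
    grasp_modes: List[GraspMode] = field(default_factory=list)


# Hypothetical example: a drinking cup, independent of its exact size or color.
cup = ObjectCategory(
    name="cup",
    primitives=[Primitive.CYLINDER],
    size_range_m=(0.06, 0.12),
    functional_properties=["container", "graspable by handle"],
    mechanical_attributes={"max_force_N": 10, "deformable": False},
    grasp_modes=[
        GraspMode(
            name="side grasp on the body",
            contact_region="lateral surface, below the rim",
            approach_direction=(1.0, 0.0, 0.0),
            preshape=[0.2, 0.2, 0.2, 0.2],
        )
    ],
)
```

Under these assumptions, a manipulator that recognizes any cylindrical, cup-sized item could retrieve the stored grasp modes for the "cup" category directly, rather than requiring an expert-built model of that particular cup.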