Using spatial constraints for fast set-up of precise pose estimation in an industrial setting

Frederik Hagelskjær, T. Savarimuthu, N. Krüger, A. Buch
Published in: 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE), pp. 1308-1314, August 2019.
DOI: 10.1109/COASE.2019.8842876
Citations: 10

Abstract

This paper presents a method for high-precision visual pose estimation along with a simple setup procedure. Robotics for industrial solutions is a rapidly growing field, and these robots require very precise position information to perform manipulations. This is usually accomplished using fixtures or feeders, both of which are expensive hardware solutions. To enable fast changes in production, more flexible solutions are required, one possibility being visual pose estimation. Although many current pose estimation algorithms show increased performance in terms of recognition rates on public datasets, they do not focus on actual applications, neither in setup complexity nor in accuracy of object localization. In contrast, our method focuses on solving a number of specific pose estimation problems in a seamless manner with a simple setup procedure. Our method relies on a number of workcell constraints and employs a novel method for automatically finding stable object poses. In addition, we use an active rendering method for refining the estimated object poses, giving a very fine localization suitable for robotic manipulation. Experiments comparing current state-of-the-art 2D algorithms with our method show an average reduction in uncertainty from 9 mm to 0.95 mm. The method was also used by the winning team at the 2018 World Robot Summit Assembly Challenge.
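To illustrate the kind of workcell constraint the abstract refers to, the sketch below constrains a rough 6-DoF pose estimate to a tabletop: an object resting on a known plane can only translate in x/y and rotate about the table normal, and its height above the table is fixed by its stable resting pose. This is a hypothetical minimal sketch, not the paper's actual implementation; the function name, the yaw-only parameterization, and the `rest_height` parameter are my own assumptions.

```python
import numpy as np

def constrain_pose_to_table(R, t, rest_height):
    """Project a rough 6-DoF pose (R, t) onto a tabletop constraint:
    translation only in x/y, rotation only about the table normal (z).
    Hypothetical sketch, not the paper's exact method."""
    # Keep only the yaw (rotation about z) component of the estimate.
    yaw = np.arctan2(R[1, 0], R[0, 0])
    c, s = np.cos(yaw), np.sin(yaw)
    R_constrained = np.array([[c, -s, 0.0],
                              [s,  c, 0.0],
                              [0.0, 0.0, 1.0]])
    # Snap the object to its known resting height on the table plane.
    t_constrained = np.array([t[0], t[1], rest_height])
    return R_constrained, t_constrained
```

Reducing the search from 6 to 3 degrees of freedom in this way is what makes a simple setup procedure and high localization accuracy compatible: the remaining parameters can be refined much more tightly, e.g. by the paper's render-and-compare step.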