Automated pallet handling via occlusion-robust recognition learned from synthetic data*

Csaba Beleznai, Marcel Zeilinger, Johannes Huemer, Wolfgang Pointner, Sebastian Wimmer, P. Zips
{"title":"Automated pallet handling via occlusion-robust recognition learned from synthetic data*","authors":"Csaba Beleznai, Marcel Zeilinger, Johannes Huemer, Wolfgang Pointner, Sebastian Wimmer, P. Zips","doi":"10.1109/CAI54212.2023.00039","DOIUrl":null,"url":null,"abstract":"Vision-based perception is a key enabling technology when attempting to convert human work processes into automated robotic workflows in diverse production and transport scenarios. Automation of such workflows, however, faces several challenges due to the diversity governing these scenarios: various objects to be handled, differing viewing conditions, partial visibility and occlusions. In this paper we describe the concept of an occlusion-robust pallet recognition methodology trained fully in the synthetic domain and well coping with varying object appearance. A key factor in our representation learning scheme is to entirely focus on geometric traits, captured by the surface normals of dense stereo depth data. Furthermore, we adopt a local key-point detection scheme with regressed attributes allowing for a bottom-up voting step for object candidates. The proposed geometric focus combined with local key-point based reasoning yields an appearance-independent (color, texture, material, illumination) and occlusion-robust detection scheme. A quantitative evaluation of recognition accuracy for two network architectures is performed using a manually fine-annotated multi-warehouse dataset. Given the standardized pallet dimensions, spatially accurate pose estimation and tracking, and robotic path planning are carried out and demonstrated in two automated forklift demonstrators. These demonstrators exhibit the ability to consistently perform automated pick-up and drop-off of pallets carrying arbitrary items, under a wide variation of settings.","PeriodicalId":129324,"journal":{"name":"2023 IEEE Conference on Artificial Intelligence (CAI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Conference on Artificial Intelligence (CAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CAI54212.2023.00039","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Vision-based perception is a key enabling technology for converting human work processes into automated robotic workflows across diverse production and transport scenarios. Automating such workflows, however, faces several challenges owing to the diversity of these scenarios: varied objects to be handled, differing viewing conditions, partial visibility, and occlusions. In this paper we describe an occlusion-robust pallet recognition methodology that is trained entirely in the synthetic domain and copes well with varying object appearance. A key factor in our representation learning scheme is its exclusive focus on geometric traits, captured by the surface normals of dense stereo depth data. Furthermore, we adopt a local key-point detection scheme with regressed attributes, enabling a bottom-up voting step for object candidates. This geometric focus, combined with local key-point-based reasoning, yields a detection scheme that is independent of appearance (color, texture, material, illumination) and robust to occlusion. A quantitative evaluation of recognition accuracy for two network architectures is performed on a manually fine-annotated multi-warehouse dataset. Given the standardized pallet dimensions, spatially accurate pose estimation, tracking, and robotic path planning are carried out and demonstrated on two automated forklift demonstrators. These demonstrators consistently perform automated pick-up and drop-off of pallets carrying arbitrary items under a wide variation of settings.
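Two of the technical ingredients named in the abstract lend themselves to a compact illustration: computing per-pixel surface normals from a dense depth map, and letting key points with regressed offsets cast bottom-up votes for object-candidate centres. The following minimal sketch is illustrative only, not the authors' implementation; it assumes a numpy-only toy setup, and all function names (`surface_normals`, `vote_for_centres`) and parameters are invented for this example.

```python
# Illustrative sketch (not the paper's code): (1) surface normals from a
# dense depth map via finite differences, and (2) bottom-up voting for
# object centres from key points with regressed offsets. All names here
# are assumptions made for this example.
import numpy as np

def surface_normals(depth: np.ndarray) -> np.ndarray:
    """Per-pixel unit surface normals from a dense depth map (H x W).

    Approximates the depth gradient with finite differences and takes the
    cross product of the two tangent vectors; a real stereo pipeline would
    additionally account for the camera intrinsics.
    """
    dz_dv, dz_du = np.gradient(depth)            # derivatives along rows/cols
    # Tangent vectors t_u = (1, 0, dz/du) and t_v = (0, 1, dz/dv);
    # their cross product is (-dz/du, -dz/dv, 1).
    normals = np.dstack([-dz_du, -dz_dv, np.ones_like(depth)])
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-8, None)

def vote_for_centres(keypoints, offsets, grid_shape, cell=8):
    """Accumulate bottom-up votes for object centres.

    keypoints : (N, 2) array of detected key-point locations (x, y).
    offsets   : (N, 2) regressed offsets pointing from each key point
                toward its object centre.
    Returns a coarse accumulator grid; peaks are object candidates.
    """
    acc = np.zeros(grid_shape, dtype=np.float32)
    votes = (keypoints + offsets) / cell         # predicted centres, coarse grid
    for x, y in votes:
        i, j = int(round(y)), int(round(x))
        if 0 <= i < grid_shape[0] and 0 <= j < grid_shape[1]:
            acc[i, j] += 1.0                     # each key point casts one vote
    return acc

if __name__ == "__main__":
    # Toy depth map: a slanted plane, so all normals share one direction.
    u, v = np.meshgrid(np.arange(64), np.arange(64))
    depth = 2.0 + 0.01 * u
    n = surface_normals(depth.astype(np.float64))
    print("mean normal:", n.reshape(-1, 3).mean(axis=0).round(3))

    # Four key points whose regressed offsets all point to (32, 32).
    kp = np.array([[8.0, 8.0], [56.0, 8.0], [8.0, 56.0], [56.0, 56.0]])
    off = np.array([[32.0, 32.0]]) - kp
    acc = vote_for_centres(kp, off, grid_shape=(8, 8))
    print("peak cell:", np.unravel_index(acc.argmax(), acc.shape))
```

On the toy slanted-plane depth map every normal points in the same direction, and all four corner key points vote into a single accumulator cell, mirroring how peaks in the vote map would serve as object candidates in the kind of bottom-up scheme the abstract describes.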