Faster and Finer Pose Estimation for Object Pool in a Single RGB Image

Lee Aing, W. Lie, J. Chiang
{"title":"更快和更精细的姿态估计对象池在一个单一的RGB图像","authors":"Lee Aing, W. Lie, J. Chiang","doi":"10.1109/VCIP53242.2021.9675316","DOIUrl":null,"url":null,"abstract":"Predicting/estimating the 6DoF pose parameters for multi-instance objects accurately in a fast manner is an important issue in robotic and computer vision. Even though some bottom-up methods have been proposed to be able to estimate multiple instance poses simultaneously, their accuracy cannot be considered as good enough when compared to other state-of-the-art top-down methods. Their processing speed still cannot respond to practical applications. In this paper, we present a faster and finer bottom-up approach of deep convolutional neural network to estimate poses of the object pool even multiple instances of the same object category present high occlusion/overlapping. Several techniques such as prediction of semantic segmentation map, multiple keypoint vector field, and 3D coordinate map, and diagonal graph clustering are proposed and combined to achieve the purpose. Experimental results and ablation studies show that the proposed system can achieve comparable accuracy at a speed of 24.7 frames per second for up to 7 objects by evaluation on the well-known Occlusion LINEMOD dataset.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Faster and Finer Pose Estimation for Object Pool in a Single RGB Image\",\"authors\":\"Lee Aing, W. Lie, J. Chiang\",\"doi\":\"10.1109/VCIP53242.2021.9675316\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Predicting/estimating the 6DoF pose parameters for multi-instance objects accurately in a fast manner is an important issue in robotic and computer vision. Even though some bottom-up methods have been proposed to be able to estimate multiple instance poses simultaneously, their accuracy cannot be considered as good enough when compared to other state-of-the-art top-down methods. Their processing speed still cannot respond to practical applications. In this paper, we present a faster and finer bottom-up approach of deep convolutional neural network to estimate poses of the object pool even multiple instances of the same object category present high occlusion/overlapping. Several techniques such as prediction of semantic segmentation map, multiple keypoint vector field, and 3D coordinate map, and diagonal graph clustering are proposed and combined to achieve the purpose. 
Experimental results and ablation studies show that the proposed system can achieve comparable accuracy at a speed of 24.7 frames per second for up to 7 objects by evaluation on the well-known Occlusion LINEMOD dataset.\",\"PeriodicalId\":114062,\"journal\":{\"name\":\"2021 International Conference on Visual Communications and Image Processing (VCIP)\",\"volume\":\"12 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 International Conference on Visual Communications and Image Processing (VCIP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/VCIP53242.2021.9675316\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Visual Communications and Image Processing (VCIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/VCIP53242.2021.9675316","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

Fast and accurate prediction/estimation of the 6DoF pose parameters of multiple object instances is an important problem in robotics and computer vision. Although several bottom-up methods can estimate the poses of multiple instances simultaneously, their accuracy is not on par with state-of-the-art top-down methods, and their processing speed still falls short of practical requirements. In this paper, we present a faster and finer bottom-up deep convolutional neural network that estimates the poses of a pool of objects even when multiple instances of the same object category are heavily occluded or overlapping. To this end, we propose and combine several techniques: prediction of a semantic segmentation map, multiple keypoint vector fields, and a 3D coordinate map, together with diagonal graph clustering. Experimental results and ablation studies on the well-known Occlusion LINEMOD dataset show that the proposed system achieves accuracy comparable to the state of the art at a speed of 24.7 frames per second for up to 7 objects.