Partial Depth Estimation with Single Image Using YOLO and CNN for Robot Arm Control

Hirohisa Kato, F. Nagata, Yuito Murakami, Keita Koya
{"title":"基于YOLO和CNN的单图像部分深度估计用于机械臂控制","authors":"Hirohisa Kato, F. Nagata, Yuito Murakami, Keita Koya","doi":"10.1109/ICMA54519.2022.9856055","DOIUrl":null,"url":null,"abstract":"This paper presents experimental results of partial object detection using YOLO (You Only Look Once) and partial depth estimation using CNN (Convolutional Neural Network) for application to a robot arm control. In recent years, image recognition and its application automation, as an alternative to advanced work, are attracting attention in various fields. In order for robot arms used in factories to perform high-value-added and flexible work, it is necessary to control them by object detection and recognition by deep learning. In this study, the authors propose a new approach for estimating the depth of partial images using YOLO and uses it to control the robot arm. In the experiments, both ends of a pen detected by YOLO are used for the input to a CNN. The detected parts are saved to images with a size of about 60 × 60 pixels, and the depths are estimated by giving the cropped images to the CNN. A desktop-sized robot with 4DOFs can successfully pick the pen by referring the depths. The effectiveness of the proposed method is demonstrated through experiments.","PeriodicalId":120073,"journal":{"name":"2022 IEEE International Conference on Mechatronics and Automation (ICMA)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Partial Depth Estimation with Single Image Using YOLO and CNN for Robot Arm Control\",\"authors\":\"Hirohisa Kato, F. Nagata, Yuito Murakami, Keita Koya\",\"doi\":\"10.1109/ICMA54519.2022.9856055\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents experimental results of partial object detection using YOLO (You Only Look Once) and partial depth estimation using CNN (Convolutional Neural Network) for application to a robot arm control. In recent years, image recognition and its application automation, as an alternative to advanced work, are attracting attention in various fields. In order for robot arms used in factories to perform high-value-added and flexible work, it is necessary to control them by object detection and recognition by deep learning. In this study, the authors propose a new approach for estimating the depth of partial images using YOLO and uses it to control the robot arm. In the experiments, both ends of a pen detected by YOLO are used for the input to a CNN. The detected parts are saved to images with a size of about 60 × 60 pixels, and the depths are estimated by giving the cropped images to the CNN. A desktop-sized robot with 4DOFs can successfully pick the pen by referring the depths. 
The effectiveness of the proposed method is demonstrated through experiments.\",\"PeriodicalId\":120073,\"journal\":{\"name\":\"2022 IEEE International Conference on Mechatronics and Automation (ICMA)\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-07\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2022 IEEE International Conference on Mechatronics and Automation (ICMA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICMA54519.2022.9856055\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Mechatronics and Automation (ICMA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMA54519.2022.9856055","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 4

Abstract

This paper presents experimental results on partial object detection using YOLO (You Only Look Once) and partial depth estimation using a CNN (Convolutional Neural Network), applied to robot arm control. In recent years, image recognition and its application to automating skilled work have been attracting attention in various fields. For robot arms used in factories to perform high-value-added, flexible work, they must be controlled through object detection and recognition based on deep learning. In this study, the authors propose a new approach that estimates the depth of partial images detected by YOLO and use it to control a robot arm. In the experiments, the two ends of a pen detected by YOLO are used as the input to a CNN: each detected part is cropped to an image of about 60 × 60 pixels, and its depth is estimated by feeding the cropped image to the CNN. A desktop-sized robot arm with 4 DOFs successfully picks up the pen by referring to the estimated depths. The effectiveness of the proposed method is demonstrated through experiments.
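
The page gives only the abstract, but the pipeline it describes (YOLO detecting the two ends of the pen, cropping each detection to roughly 60 × 60 pixels, and a CNN regressing a depth value from each crop) can be illustrated with a rough sketch. Everything below is an assumption for illustration only: the CNN layout, the use of the Ultralytics YOLO package, and the helper estimate_pen_end_depths are not taken from the paper, whose actual network, training data, and robot interface are not described on this page.

```python
# Illustrative sketch only -- model weights, the CNN layout, and the helper
# names are assumptions; the paper does not publish its implementation here.
import cv2
import torch
import torch.nn as nn
from ultralytics import YOLO  # assumed: a YOLO implementation such as Ultralytics


class DepthCNN(nn.Module):
    """Small CNN that regresses a single depth value from a 60x60 crop (assumed layout)."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 60 -> 30
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 30 -> 15
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 15 * 15, 64), nn.ReLU(),
            nn.Linear(64, 1),  # estimated depth of the cropped pen end
        )

    def forward(self, x):
        return self.regressor(self.features(x))


def estimate_pen_end_depths(image_path, detector, depth_net, crop_size=60):
    """Detect the pen ends with YOLO, crop each, and estimate its depth with the CNN."""
    img = cv2.imread(image_path)
    results = detector(img)[0]  # YOLO inference on the single RGB image
    depths = []
    for x1, y1, x2, y2 in results.boxes.xyxy.int().tolist():  # one box per detected pen end
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        half = crop_size // 2
        crop = img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
        crop = cv2.resize(crop, (crop_size, crop_size))
        tensor = torch.from_numpy(crop).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            depths.append(depth_net(tensor).item())
    return depths
```

In the paper, the two estimated depths are then used by the 4-DOF desktop robot arm to pick up the pen; that control step is not sketched here.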