Vision-based robotic grasping using Faster R-CNN–GRCNN dual-layer detection mechanism

Jianguo Duan, Liwen Zhuang, Qinglei Zhang, Jiyun Qin, Ying Zhou

Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture (IF 1.9, JCR Q3, Engineering, Manufacturing)
Journal Article · Published 2024-05-17
DOI: 10.1177/09544054241249217
Citations: 0
Abstract
Visual grasping technology plays a crucial role in robotic applications such as industrial automation, warehousing, and logistics. However, current visual grasping methods face limitations in industrial scenarios. Focusing the camera solely on the workspace where the grasping target is located deprives the system of wider environmental information, while monitoring the entire working area introduces irrelevant data and hinders accurate grasping pose estimation. In this paper, we propose a novel approach that combines a global camera and a depth camera to enable efficient target grasping. Specifically, we introduce a dual-layer detection mechanism based on Faster R-CNN–GRCNN. By enhancing Faster R-CNN with attention mechanisms, we focus the global camera on the workpiece placement area and detect the target object within that region. When the robot receives the command to grasp the workpiece, the improved Faster R-CNN recognizes the workpiece and guides the robot towards the target location. Subsequently, the depth camera on the robot determines the grasping pose using a Generative Residual Convolutional Neural Network (GRCNN) and performs the grasping action. We validate the feasibility and effectiveness of the proposed framework through experiments on collaborative assembly tasks using two robotic arms.
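The dual-layer pipeline described in the abstract can be sketched as follows. This is a minimal, hypothetical illustration only: layer 1 (the attention-enhanced Faster R-CNN on the global camera) localizes the target workpiece in the placement area, and layer 2 (GRCNN on the robot's depth camera) estimates the grasp pose. All function names, data structures, and the stubbed detector/regressor outputs are stand-ins, not the authors' implementation.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    bbox: tuple  # (x, y, w, h) in global-camera pixel coordinates

@dataclass
class GraspPose:
    x: float      # grasp position (m)
    y: float
    z: float
    angle: float  # gripper rotation (rad)
    width: float  # gripper opening (m)

def detect_workpiece(global_image, target_label):
    """Layer 1: stand-in for the attention-enhanced Faster R-CNN.
    Scans the (stubbed) detector output for the requested workpiece."""
    for det in global_image["detections"]:  # placeholder detector output
        if det.label == target_label:
            return det
    return None

def estimate_grasp(depth_image):
    """Layer 2: stand-in for GRCNN grasp-pose regression on the
    wrist depth-camera image. A real GRCNN outputs per-pixel
    quality/angle/width maps; here the stub image carries the result."""
    return depth_image["best_grasp"]

def grasp_pipeline(global_image, depth_image, target_label):
    """Dual-layer flow: detect globally, then estimate the grasp locally."""
    det = detect_workpiece(global_image, target_label)
    if det is None:
        return None  # target not found in the placement area
    # In the real system the robot moves toward det.bbox before the
    # depth camera images the workpiece up close.
    return estimate_grasp(depth_image)

# Illustrative usage with stubbed sensor data
scene = {"detections": [Detection("bolt", (120, 80, 40, 40))]}
depth = {"best_grasp": GraspPose(0.42, 0.10, 0.05, 1.57, 0.03)}
pose = grasp_pipeline(scene, depth, "bolt")
```

The key design point mirrored here is the separation of concerns: the global detection layer never needs to reason about grasp geometry, and the grasp layer only ever sees the region the robot was guided to.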
About the journal
Manufacturing industries throughout the world are changing very rapidly. New concepts and methods are being developed and exploited to enable efficient and effective manufacturing, and existing manufacturing processes are being improved to meet the requirements of lean and agile manufacturing. The aim of the Journal of Engineering Manufacture is to provide a focus for these developments by publishing original papers and review papers covering technological and scientific research, development, and management implementation in manufacturing. The journal is peer reviewed.
Contributions are welcomed in the broad areas of manufacturing processes, manufacturing technology and factory automation, digital manufacturing, design, and manufacturing systems, including management relevant to engineering manufacture. Of particular interest at present are papers concerned with digital manufacturing, metrology-enabled manufacturing, smart factories, additive manufacturing, and composites, as well as specialist manufacturing fields such as nanotechnology, sustainable and clean manufacturing, and bio-manufacturing.
Articles may be Research Papers, Reviews, Technical Notes, or Short Communications.