Real-Time Dynamic Object Grasping with a Robotic Arm: A Design for Visually Impaired Persons
Francis Liri, Austin Luu, K. George, Axel Angulo, Johnathan Dittloff
2022 IEEE World AI IoT Congress (AIIoT), published 2022-06-06
DOI: 10.1109/aiiot54504.2022.9817307 (https://doi.org/10.1109/aiiot54504.2022.9817307)
Citations: 0
Abstract
Robotic arms have increasingly been used in applications such as manufacturing and medicine. Physically impaired individuals often have difficulty completing everyday tasks, such as picking an object off a shelf or retrieving items from the refrigerator, and they rely on caregivers and others for help. To address this issue, research is ongoing into how robotic arms and other technologies can improve the lives of such persons. This work builds on existing research that used object recognition and grasp detection components to identify a bottle and obtain its real-world coordinates, but did not fully integrate the solution with a robotic arm [1]. We fully integrate the object recognition and grasp detection components with a Dobot Magician robotic arm. Using an eye-to-hand approach, we determine the camera-to-robot translation matrix from experimental results. An Intel RealSense D455 camera generates the images used for object detection and grasp point detection. The grasp point coordinates are passed to the robotic arm, which applies the translation before moving to grasp the bottle. Our tests with the fully integrated robotic arm show that the solution is feasible: given the achieved translation and depth accuracy, the arm can pick up a bottle placed randomly within a given area.
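To make the eye-to-hand step concrete, the sketch below shows one common way such a pipeline maps a grasp point from camera coordinates into robot base coordinates with a 4x4 homogeneous transformation matrix. This is a minimal illustration, not the authors' implementation: the matrix values, the `camera_to_robot` function name, and the example point are all placeholders, since the paper determines its actual matrix experimentally.

```python
import numpy as np

# Hypothetical eye-to-hand transform from camera frame to robot base frame.
# In the paper this matrix is determined from experimental results; the
# rotation and translation values below are placeholders, not calibration data.
T_CAM_TO_ROBOT = np.array([
    [ 0.0, -1.0,  0.0, 150.0],  # placeholder rotation rows with
    [-1.0,  0.0,  0.0,  80.0],  # placeholder translation offsets (mm)
    [ 0.0,  0.0, -1.0, 420.0],
    [ 0.0,  0.0,  0.0,   1.0],
])

def camera_to_robot(grasp_point_cam):
    """Map a grasp point (x, y, z) in camera coordinates, e.g. taken from
    the RealSense D455 depth output, into robot base coordinates."""
    p = np.append(np.asarray(grasp_point_cam, dtype=float), 1.0)  # homogeneous
    return (T_CAM_TO_ROBOT @ p)[:3]

# Example: a detected grasp point roughly 0.3 m in front of the camera.
print(camera_to_robot([12.5, -40.0, 300.0]))
```

In an eye-to-hand setup the camera is fixed in the workspace rather than mounted on the arm, so a single static transform of this form suffices; the resulting base-frame coordinates would then be sent to the Dobot Magician's motion commands.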