D. Kato, Ken Yoshitugu, N. Maeda, T. Hirogaki, E. Aoyama, Kenichi Takahashi
Volume 2: 41st Computers and Information in Engineering Conference (CIE)
DOI: 10.1115/detc2021-68237
Published: August 17, 2021
Finding Features of Positioning Error for Large Industrial Robots Based on Convolutional Neural Network
Most industrial robots are taught using the teaching-playback method, which makes them unsuitable for variable production systems. Although offline teaching methods have been developed, they are rarely used in practice because the position and posture of the end-effector cannot be reproduced accurately. Many studies have therefore attempted to calibrate position and posture, but none has reached a practical level, because these methods consider only the joint angles of a stationary robot rather than the robot's behavior during motion. Today, Internet of Things technologies make it easy to collect servo information during numerically controlled operation. In this study, we propose a method that records servo information during robot motion and converts it into images so that a convolutional neural network (CNN) can find features. A large industrial robot was used, and the three-dimensional coordinates of its end-effector were measured with a laser tracker. The CNN accurately learned the robot's positioning error, and we extracted features at the points where the error was largest. When the CNN extracted features of the X-axis positioning error, the joint 1 current emerged as a dominant feature, indicating that vibration in the joint 1 current contributes to the X-axis positioning error.
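The core preprocessing step the abstract describes — rendering multi-channel servo time series as a 2D image suitable for a CNN — might be sketched as follows. The channel layout, per-channel min-max normalization, and output size here are illustrative assumptions; the paper does not specify its exact encoding, and `servo_to_image` is a hypothetical helper name.

```python
import numpy as np

def servo_to_image(signals, height=64):
    """Convert multi-channel servo data (shape: channels x samples),
    e.g. per-joint currents sampled during a motion, into a 2D array
    that a CNN could consume as a single-channel image.

    Each channel is min-max normalized to [0, 1] and repeated over a
    band of rows, so every joint occupies a horizontal stripe.
    This is a sketch of one plausible encoding, not the paper's method.
    """
    signals = np.asarray(signals, dtype=float)
    n_ch, n_samp = signals.shape
    rows_per_ch = height // n_ch           # rows allotted to each joint signal
    image = np.zeros((rows_per_ch * n_ch, n_samp))
    for i, ch in enumerate(signals):
        lo, hi = ch.min(), ch.max()
        # Guard against a constant channel (zero dynamic range).
        norm = (ch - lo) / (hi - lo) if hi > lo else np.zeros_like(ch)
        # Broadcast the 1-D normalized signal across its stripe of rows.
        image[i * rows_per_ch:(i + 1) * rows_per_ch, :] = norm
    return image

# Example: three synthetic "joint current" traces over 100 samples.
t = np.linspace(0.0, 6.0, 100)
img = servo_to_image(np.vstack([np.sin(t), np.cos(t), t]), height=64)
```

Images built this way for many motion trials, each labeled with the laser-tracker positioning error, would form the CNN's training set; inspecting which stripes drive the prediction then points back to the responsible joint signal.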