Title: Pipe Environment Elbow Feature Extraction Based on 2D Point Cloud
Authors: Yu Ding, Yifei Wu, Yinrui Ma
Venue: 2022 International Conference on Cyber-Physical Social Intelligence (ICCSI)
Publication date: 2022-11-18
DOI: 10.1109/ICCSI55536.2022.9970649
Abstract
For autonomous pipeline robots, extracting features of the pipeline environment, such as 90-degree elbows, can greatly reduce odometry error and improve real-time positioning accuracy. Most current feature extraction methods, such as least squares fitting, rely on iterative computation; with the huge volume of point cloud data, their computational cost is high, which limits their use on embedded robots. To address this problem, this article proposes a network framework for the pipe environment that takes only point cloud data as input. Building on You Only Look Once v4-tiny (YOLOv4-tiny), a fast 2D object detection framework for images, the discrete 2D point cloud is encoded as a low-resolution bird's-eye-view image and fed to the network, which detects and segments points of interest (POIs) to extract elbow features and accurately estimate the pipeline robot's real-time position. Experiments in a narrow pipe environment show that, compared with current point cloud feature extraction methods, the proposed method is both faster and more accurate.
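The bird's-eye-view encoding the abstract describes, rasterizing a discrete 2D point cloud into a low-resolution image suitable as network input, can be sketched as follows. This is a minimal illustration assuming a square region of interest and an occupancy-style grid; the grid size and metric extent here are illustrative placeholders, not the paper's actual values.

```python
import numpy as np

def encode_bev(points, grid_size=(64, 64), extent=2.0):
    """Rasterize 2D points into a low-resolution bird's-eye-view grid.

    points:    (N, 2) array of (x, y) coordinates in metres.
    grid_size: output image resolution (illustrative value).
    extent:    half-width of the square region centred on the sensor
               (illustrative value; the paper's parameters are not given here).
    Returns an occupancy grid with 1.0 in cells that contain a point.
    """
    grid = np.zeros(grid_size, dtype=np.float32)
    # Map metric coordinates in [-extent, extent) to integer cell indices.
    cells = ((points + extent) / (2.0 * extent) * np.array(grid_size)).astype(int)
    # Discard points that fall outside the region of interest.
    inside = np.all((cells >= 0) & (cells < np.array(grid_size)), axis=1)
    # Mark occupied cells (row index = y cell, column index = x cell).
    grid[cells[inside, 1], cells[inside, 0]] = 1.0
    return grid

pts = np.array([[0.5, 0.5], [-1.0, 1.5], [5.0, 5.0]])  # last point is out of range
bev = encode_bev(pts)
```

A low-resolution grid like this keeps the network input small and fixed-size regardless of how many raw points the sensor returns, which is what makes a lightweight detector such as YOLOv4-tiny practical on embedded hardware.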