An Improved PointNet++ Based Method for 3D Point Cloud Geometric Features Segmentation in Mechanical Parts
Peng Zhang, Chao Kong, Yuanping Xu, Chaolong Zhang, Jin Jin, Tukun Li, Xiangqian Jiang, Dan Tang
Procedia CIRP, 2024. DOI: 10.1016/j.procir.2024.10.006
Abstract
The extraction of geometric features such as holes, arcs, and surfaces of mechanical parts is crucial for quality control. Existing methods for geometric feature segmentation on 3D point clouds still have limitations, especially when multiple types of geometric features must be extracted simultaneously from complete workpieces. To this end, this study investigates segmentation methods that take 3D point cloud datasets of mechanical parts as input and employs an improved PointNet++ deep learning model to address this difficulty. First, the Set Abstraction module in PointNet++ is modified by incorporating Self-Attention mechanisms to increase interactivity and global correlation among data points. Then, the local feature extraction Multilayer Perceptron (MLP) from PointNet-Transformer is integrated to improve feature extraction accuracy. To address the inherent class imbalance, the Focal Tversky Loss is employed as the loss function so that geometric features that account for relatively few points can still be fully trained. Finally, a statistical filtering algorithm is applied to suppress noise and attenuate subtle irregularities, substantially enhancing the smoothness of the segmented geometric features. The experimental results demonstrate that the proposed model achieves an accuracy of 86.6% on geometric feature segmentation and a mean Intersection over Union (mIoU) of 0.84. A comparison with the original PointNet++ shows that the proposed method improves accuracy and mIoU by 3.7% and 0.03, respectively.
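The abstract names the Focal Tversky Loss as the mechanism for coping with class imbalance among feature types, but does not give its formulation or hyperparameters. The sketch below is a minimal PyTorch illustration of the standard Focal Tversky Loss (Tversky index per class, raised to a focusing exponent), assuming per-point softmax probabilities and integer labels; the values of alpha, beta, and gamma are illustrative defaults, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def focal_tversky_loss(probs, targets, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """Focal Tversky Loss for multi-class point cloud segmentation (illustrative).

    probs:   (N, C) softmax probabilities for N points and C feature classes
    targets: (N,)   integer class label per point
    alpha, beta weight false negatives vs. false positives;
    gamma < 1 emphasises classes with a low Tversky index (hard / rare classes).
    """
    num_classes = probs.shape[1]
    one_hot = F.one_hot(targets, num_classes).float()          # (N, C)

    # Per-class soft counts of true positives, false negatives, false positives
    tp = (probs * one_hot).sum(dim=0)
    fn = ((1.0 - probs) * one_hot).sum(dim=0)
    fp = (probs * (1.0 - one_hot)).sum(dim=0)

    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)  # (C,)
    return ((1.0 - tversky) ** gamma).mean()
```

With alpha larger than beta, false negatives are penalised more heavily than false positives, which favours under-represented feature classes; the exponent gamma further focuses training on classes whose Tversky index is still low. Whether the paper uses these exact weightings is not stated in the abstract.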