A Transformer-Based Approach for Metal 3D Printing Quality Recognition
Weihao Zhang, Jiapeng Wang, Honglin Ma, Qi Zhang, Shuqian Fan
2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2022. DOI: 10.1109/ICMEW56448.2022.9859324
Abstract
Massive amounts of unlabeled production data hinder the large-scale application of advanced supervised learning techniques in modern industry. Metal 3D printing generates huge amounts of in-situ data that are closely related to the forming quality of parts. To avoid the labor cost of re-labeling the dataset whenever printing materials or process parameters change, a forming quality recognition model based on deep clustering is designed, which makes the forming quality recognition task in metal 3D printing more flexible. Inspired by the success of the Vision Transformer, we introduce convolutional neural networks into the Vision Transformer structure to model the inductive bias of images while learning global representations. Our approach achieves state-of-the-art accuracy compared with other Vision Transformer-based models. In addition, the proposed framework is a good candidate for industrial vision tasks where annotations are scarce.
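The abstract describes a hybrid architecture in which a convolutional front end supplies image inductive bias (locality, translation equivariance) to a Vision Transformer encoder that learns global representations. The sketch below is a minimal illustration of that general idea in PyTorch, not the authors' implementation: the layer sizes, class names (ConvStem, HybridViTClassifier), input resolution, and number of quality classes are all assumptions, and the deep-clustering training objective used to avoid re-labeling is not reproduced here.

```python
# Hypothetical sketch of a CNN-stem + Transformer-encoder classifier for
# forming-quality images. Illustrative only; not the paper's exact model.
import torch
import torch.nn as nn


class ConvStem(nn.Module):
    """Convolutional stem that tokenizes an image, injecting the locality
    inductive bias of CNNs before global attention."""

    def __init__(self, in_ch: int = 1, embed_dim: int = 256):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_ch, 64, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(128),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, embed_dim, kernel_size=3, stride=2, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.stem(x)                      # (B, C, H/8, W/8)
        return feat.flatten(2).transpose(1, 2)   # (B, num_tokens, C)


class HybridViTClassifier(nn.Module):
    """CNN tokens -> Transformer encoder -> class logits via a CLS token."""

    def __init__(self, num_classes: int = 3, embed_dim: int = 256,
                 depth: int = 4, num_heads: int = 8, num_tokens: int = 256):
        super().__init__()
        self.stem = ConvStem(embed_dim=embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_tokens + 1, embed_dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, dim_feedforward=4 * embed_dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tokens = self.stem(x)                                 # (B, N, C)
        cls = self.cls_token.expand(tokens.size(0), -1, -1)   # (B, 1, C)
        tokens = torch.cat([cls, tokens], dim=1)
        tokens = tokens + self.pos_embed[:, : tokens.size(1)]
        encoded = self.encoder(tokens)
        return self.head(encoded[:, 0])                       # classify CLS token

if __name__ == "__main__":
    # Assumed 128x128 single-channel in-situ images -> (128/8)^2 = 256 tokens.
    model = HybridViTClassifier(num_classes=3, num_tokens=256)
    logits = model(torch.randn(2, 1, 128, 128))
    print(logits.shape)  # torch.Size([2, 3])
```

In the unsupervised setting the abstract targets, the classification head would be replaced or complemented by a clustering objective over the encoder's representations, so that quality categories emerge without manual labels.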