END-TO-END PREDICTION OF WELD PENETRATION IN REAL TIME BASED ON DEEP LEARNING

DOI: 10.13023/ETD.2020.142
Wenhua Jiao
{"title":"END-TO-END PREDICTION OF WELD PENETRATION IN REAL TIME BASED ON DEEP LEARNING","authors":"Wenhua Jiao","doi":"10.13023/ETD.2020.142","DOIUrl":null,"url":null,"abstract":"OF DISSERTATION END-TO-END PREDICTION OF WELD PENETRATION IN REAL TIME BASED ON DEEP LEARNING Welding is an important joining technique that has been automated/robotized. In automated/robotic welding applications, however, the parameters are preset and are not adaptively adjusted to overcome unpredicted disturbances, which cause these applications to not be able to meet the standards from welding/manufacturing industry in terms of quality, efficiency, and individuality. Combining information sensing and processing with traditional welding techniques is a significant step toward revolutionizing the welding industry. In practical welding, the weld penetration as measured by the back-side bead width is a critical factor when determining the integrity of the weld produced. However, the back-side bead width is difficult to be directly monitored during manufacturing because it occurs underneath the surface of the welded workpiece. Therefore, predicting back-side bead width based on conveniently sensible information from the welding process is a fundamental issue in intelligent welding. Traditional research methods involve an indirect process that includes defining and extracting key characteristic information from the sensed data and building a model to predict the target information from the characteristic information. Due to a lack of feature information, the cumulative error of the extracted information and the complex sensing process directly affect prediction accuracy and real-time performance. An end-to-end, datadriven prediction system is proposed to predict the weld penetration status from top-side images during welding. In this method, a passive-vision sensing system with two cameras to simultaneously monitor the top-side and back-bead information is developed. Then the weld joints are classified into three classes (i.e., under penetration, desirable penetration, and excessive penetration) according to the back-bead width. Taking the weld pool-arc images as inputs and corresponding penetration statuses as labels, an end-to-end convolutional neural network (CNN) is designed and trained so the features are automatically defined and extracted. In order to increase accuracy and training speed, a transfer learning approach based on a residual neural network (ResNet) is developed. This ResNet-based model is pretrained on an ImageNet dataset to process a better feature-extracting ability, and its fully connected layers are modified based on our own dataset. Our experiments show that this transfer learning approach can decrease training time and improve performance. Furthermore, this study proposes that the present weld pool-arc image is fused with two previous images that were acquired 1/6s and 2/6s earlier. The fused single image thus reflects the dynamic welding phenomena, and prediction accuracy is significantly improved with the image-sequence data by fusing temporal information to the input layer of the CNN (early fusion). 
Due to the critical role of weld penetration and the negligible impact on system implementation, this method represents major progress in the field of weld-penetration monitoring and is expected to provide more significant improvements during welding using pulsed current where the process becomes highly dynamic.","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.13023/ETD.2020.142","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 1

Abstract

Welding is an important joining technique that has been automated and robotized. In automated/robotic welding applications, however, the parameters are preset and are not adaptively adjusted to overcome unpredicted disturbances, which prevents these applications from meeting the welding/manufacturing industry's standards for quality, efficiency, and individuality. Combining information sensing and processing with traditional welding techniques is a significant step toward revolutionizing the welding industry.

In practical welding, the weld penetration, as measured by the back-side bead width, is a critical factor in determining the integrity of the weld produced. However, the back-side bead width is difficult to monitor directly during manufacturing because it occurs underneath the surface of the welded workpiece. Predicting the back-side bead width from conveniently sensed information about the welding process is therefore a fundamental issue in intelligent welding. Traditional research methods follow an indirect process: key characteristic information is defined and extracted from the sensed data, and a model is built to predict the target information from that characteristic information. Due to the lack of feature information, the cumulative error of the extracted information and the complexity of the sensing process directly affect prediction accuracy and real-time performance.

An end-to-end, data-driven prediction system is proposed to predict the weld penetration status from top-side images during welding. In this method, a passive-vision sensing system with two cameras is developed to simultaneously monitor the top-side and back-bead information. The weld joints are then classified into three classes (i.e., under penetration, desirable penetration, and excessive penetration) according to the back-bead width. Taking the weld pool-arc images as inputs and the corresponding penetration statuses as labels, an end-to-end convolutional neural network (CNN) is designed and trained so that the features are automatically defined and extracted. To increase accuracy and training speed, a transfer learning approach based on a residual neural network (ResNet) is developed: the ResNet-based model is pretrained on the ImageNet dataset to obtain better feature-extraction ability, and its fully connected layers are modified for our own dataset. Our experiments show that this transfer learning approach decreases training time and improves performance. Furthermore, this study proposes fusing the present weld pool-arc image with two previous images acquired 1/6 s and 2/6 s earlier. The fused single image thus reflects the dynamic welding phenomena, and prediction accuracy is significantly improved with the image-sequence data by fusing temporal information into the input layer of the CNN (early fusion).

Because of the critical role of weld penetration and the negligible impact on system implementation, this method represents major progress in the field of weld-penetration monitoring and is expected to provide even greater improvements during welding with pulsed current, where the process becomes highly dynamic.
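To illustrate the transfer learning and early-fusion ideas summarized above, the following is a minimal PyTorch sketch, not the dissertation's actual implementation. It assumes the three weld pool-arc frames (the current frame and those captured 1/6 s and 2/6 s earlier) are grayscale and are stacked as the three input channels of a single 224x224 image, and it uses torchvision's ResNet-18 pretrained on ImageNet with only the final fully connected layer replaced for the three penetration classes; the backbone choice, input size, and preprocessing are assumptions, not details given in the abstract.

# Minimal sketch of an early-fusion ResNet classifier for weld-penetration status.
# Assumptions beyond the abstract: grayscale frames, 224x224 input, ResNet-18 backbone,
# and channel stacking as the fusion mechanism. Requires torchvision >= 0.13.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 3  # under penetration, desirable penetration, excessive penetration


def build_model() -> nn.Module:
    """ResNet backbone pretrained on ImageNet; only the classifier head is replaced."""
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    return model


def fuse_frames(frame_t: torch.Tensor,
                frame_t_minus_1: torch.Tensor,
                frame_t_minus_2: torch.Tensor) -> torch.Tensor:
    """Early fusion: stack the current frame with the frames acquired 1/6 s and
    2/6 s earlier as the three channels of one image.
    Each frame is a (H, W) grayscale tensor scaled to [0, 1]."""
    return torch.stack([frame_t, frame_t_minus_1, frame_t_minus_2], dim=0)


if __name__ == "__main__":
    model = build_model()
    # Dummy 224x224 grayscale frames standing in for weld pool-arc images.
    frames = [torch.rand(224, 224) for _ in range(3)]
    fused = fuse_frames(*frames).unsqueeze(0)  # shape: (1, 3, 224, 224)
    logits = model(fused)
    print(logits.shape)                        # torch.Size([1, 3])

Feeding the fused image instead of a single frame leaves the network unchanged; only the input construction differs, which is what makes the comparison between single-frame and early-fusion inputs isolate the value of the temporal information.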