TF-F-GAN: A GAN-based model to predict the assembly physical fields under multi-modal variables fusion on vision transformer

Impact Factor: 8.0 | CAS Tier 1 (Engineering & Technology) | JCR Q1 (Computer Science, Artificial Intelligence)
Yuming Liu, Wencai Yu, Qingyuan Lin, Wei Wang, Ende Ge, Aihua Su, Yong Zhao
{"title":"TF-F-GAN: A GAN-based model to predict the assembly physical fields under multi-modal variables fusion on vision transformer","authors":"Yuming Liu ,&nbsp;Wencai Yu ,&nbsp;Qingyuan Lin ,&nbsp;Wei Wang ,&nbsp;Ende Ge ,&nbsp;Aihua Su ,&nbsp;Yong Zhao","doi":"10.1016/j.aei.2024.102871","DOIUrl":null,"url":null,"abstract":"<div><div>Assembly is the final step in ensuring the precision and performance of mechanical products. Geometric variables, process variables, and other material or physical variables during the assembly process can all impact the assembly outcome. Therefore, the key for analyzing and predicting assembly results lies in establishing the mapping relationship between various assembly variables and the results. Traditional analysis methods typically consider the evolution of a single variable in relation to the assembly results and often focus on the value at a few nodes. Essentially, this approach constructs a value-to-value nonlinear mapping model, ignoring the coupling relationships between different variables. However, with the increase in assembly precision requirements and advancements in measurement equipment, assembly analysis has evolved from value-to-value prediction to field-to-field prediction. This shift necessitates the study of the assembly physical field results for specific regions rather than focusing on a few nodes. Therefore, this paper proposes an analysis framework, TF-F-GAN (Transformer-based- Field-Generative adversarial network), which is suitable for multi-source assembly variable inputs and physical field outputs. The framework draws inspiration from multimodal fusion and text-image generation models, leveraging the Vision Transformer (VIT) network to integrate multi-source heterogeneous data from the assembly process. The physical field data is color-mapped into a cloud image format, transforming the physical field prediction into a cloud image generation problem. The CFRP bolted joint structure assembly is used as a case study in this paper. Since assembly accuracy primarily focuses on geometric deformation, the deformation field of key regions in the CFRP bolted joint is taken as the output variable. In the case study, the geometric deviations of parts and mechanical behavior during the assembly process were considered. Data augmentation methods were used to construct the dataset. After training TF-F-GAN on this dataset, transfer learning was further conducted using experimental data. The final prediction error of TF-F-GAN relative to the experimental data was less than 15 %, with a computation time of less than 7 s. This prediction framework can serve as an effective tool for predicting the physical fields of general mechanical product assembly.</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"62 ","pages":"Article 102871"},"PeriodicalIF":8.0000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Engineering Informatics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1474034624005196","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}

Abstract

Assembly is the final step in ensuring the precision and performance of mechanical products. Geometric variables, process variables, and other material or physical variables during the assembly process can all affect the assembly outcome. The key to analyzing and predicting assembly results therefore lies in establishing the mapping relationship between the various assembly variables and the results. Traditional analysis methods typically consider the evolution of a single variable in relation to the assembly results and often focus on the values at a few nodes. Essentially, this approach constructs a value-to-value nonlinear mapping model and ignores the coupling relationships between different variables. However, with increasing assembly precision requirements and advances in measurement equipment, assembly analysis has evolved from value-to-value prediction to field-to-field prediction. This shift requires studying the assembly physical field over specific regions rather than at a few nodes. This paper therefore proposes an analysis framework, TF-F-GAN (Transformer-based Field Generative Adversarial Network), suited to multi-source assembly variable inputs and physical field outputs. The framework draws inspiration from multimodal fusion and text-to-image generation models, leveraging the Vision Transformer (ViT) to integrate multi-source heterogeneous data from the assembly process. The physical field data are color-mapped into a cloud-image format, transforming physical field prediction into a cloud image generation problem. A CFRP bolted joint structure assembly is used as the case study. Since assembly accuracy primarily concerns geometric deformation, the deformation field of key regions in the CFRP bolted joint is taken as the output variable. The case study considers the geometric deviations of parts and the mechanical behavior during the assembly process, and data augmentation methods were used to construct the dataset. After training TF-F-GAN on this dataset, transfer learning was further conducted using experimental data. The final prediction error of TF-F-GAN relative to the experimental data was less than 15%, with a computation time of less than 7 s. This prediction framework can serve as an effective tool for predicting the physical fields of general mechanical product assembly.
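A concrete piece of the pipeline described above is the color mapping of the physical field into a cloud image, which is what turns field prediction into an image generation problem. The sketch below is a minimal Python illustration of that idea, not the paper's implementation: the function names, the choice of the jet colormap, and the nearest-neighbour decoding back to numerical field values are assumptions made here for demonstration.

```python
import numpy as np
from typing import Optional
from matplotlib import colormaps  # matplotlib >= 3.5


def field_to_cloud_image(field: np.ndarray, vmin: Optional[float] = None,
                         vmax: Optional[float] = None) -> np.ndarray:
    """Color-map a 2D physical field (e.g. a deformation field sampled on a
    grid) into an 8-bit RGB cloud image, so that field prediction can be
    posed as image generation."""
    vmin = float(field.min()) if vmin is None else vmin
    vmax = float(field.max()) if vmax is None else vmax
    normalized = (field - vmin) / (vmax - vmin + 1e-12)   # scale to [0, 1]
    rgba = colormaps["jet"](normalized)                   # H x W x 4 floats
    return (rgba[..., :3] * 255).astype(np.uint8)         # drop alpha channel


def cloud_image_to_field(image: np.ndarray, vmin: float, vmax: float,
                         levels: int = 256) -> np.ndarray:
    """Approximate inverse: recover field values from a generated cloud image
    by nearest-neighbour lookup against the discretized colormap."""
    lut = colormaps["jet"](np.linspace(0.0, 1.0, levels))[:, :3] * 255  # levels x 3
    pixels = image.reshape(-1, 1, 3).astype(np.float64)
    idx = np.argmin(((pixels - lut[None, :, :]) ** 2).sum(axis=-1), axis=1)
    return vmin + idx.reshape(image.shape[:2]) / (levels - 1) * (vmax - vmin)
```

Both directions must share the same colormap and normalization range; the decoding step is only needed when downstream analysis requires numerical field values rather than generated images.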

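The abstract also describes fusing multi-source heterogeneous assembly data with a Vision Transformer before generating the field image. The PyTorch sketch below indicates one plausible shape for the generator side of such a model; it is a hypothetical illustration, not the TF-F-GAN architecture. All layer sizes, the patch-token plus variable-token fusion scheme, and the convolutional decoder are assumptions, and the discriminator and adversarial training loop are omitted.

```python
import torch
import torch.nn as nn


class MultiModalViTGenerator(nn.Module):
    """Hypothetical sketch of a ViT-fused generator: patch tokens from an
    image-like geometric input are concatenated with tokens built from scalar
    process variables, passed through a Transformer encoder, and decoded into
    a cloud image by a small up-sampling head."""

    def __init__(self, img_size=128, patch=16, dim=256, n_process_vars=4, depth=4):
        super().__init__()
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        n_patches = (img_size // patch) ** 2
        self.pos = nn.Parameter(torch.zeros(1, n_patches + n_process_vars, dim))
        self.var_embed = nn.Linear(1, dim)  # one token per scalar process variable
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.decode = nn.Sequential(              # tokens -> RGB cloud image
            nn.ConvTranspose2d(dim, 128, 4, stride=4),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 3, 4, stride=4),
            nn.Sigmoid(),
        )
        self.grid = img_size // patch

    def forward(self, geometry_img, process_vars):
        # geometry_img: (B, 3, H, W) color-mapped geometric deviation field
        # process_vars: (B, n_process_vars) scalar assembly/process variables
        tokens = self.patch_embed(geometry_img).flatten(2).transpose(1, 2)  # (B, N, dim)
        var_tokens = self.var_embed(process_vars.unsqueeze(-1))             # (B, V, dim)
        fused = self.encoder(torch.cat([tokens, var_tokens], dim=1) + self.pos)
        patch_tokens = fused[:, : tokens.shape[1]]                          # drop variable tokens
        feat = patch_tokens.transpose(1, 2).reshape(-1, tokens.shape[2], self.grid, self.grid)
        return self.decode(feat)
```

In this sketch, geometry_img could be a color-mapped geometric deviation field and process_vars a vector of scalar process quantities; the output is the predicted deformation cloud image, which a discriminator would critique during adversarial training.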

Source journal
Advanced Engineering Informatics (Engineering & Technology: Engineering, Multidisciplinary)
CiteScore: 12.40
Self-citation rate: 18.20%
Articles published: 292
Review time: 45 days
Journal description: Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitatively and quantitatively. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus and INSPEC.