A deep learning-enhanced in-situ surface topography measurement method based on the focus variation microscopy and the industrial camera for material extrusion-based additive manufacturing

Impact Factor: 3.7 · CAS Tier 2 (Engineering & Technology) · JCR Q2 (Engineering, Manufacturing)
Kexin Yin, Yuchu Qin, Shan Lou, Paul Scott, Xiangqian Jiang
{"title":"A deep learning-enhanced in-situ surface topography measurement method based on the focus variation microscopy and the industrial camera for material extrusion-based additive manufacturing","authors":"Kexin Yin,&nbsp;Yuchu Qin,&nbsp;Shan Lou,&nbsp;Paul Scott,&nbsp;Xiangqian Jiang","doi":"10.1016/j.precisioneng.2025.06.012","DOIUrl":null,"url":null,"abstract":"<div><div>Focus variation microscopy is a powerful tool but is limited in its applicability to in-situ states. A research gap exists in adapting focus variation microscopy with inexpensive, easy-to-operate cameras to enable rapid surface topography acquisition in online measurements. To address this, we propose a novel deep learning-enhanced framework, M2CNet, in which images captured by a conventional industrial camera are first aligned with microscopy images using feature-based image registration. These aligned images are then paired with high-precision point clouds using a multi-focus window sliding technique and finally mapped to 3D point clouds via convolutional neural networks. A case study involving the surface of PLA fabricated by FDM showed that the M2CNet-16 model achieved the best result, with an average surface roughness (Sq) error of 6.4%, a Pearson correlation of 83.5%, and a processing time of 2.61 s. These results indicate that M2CNet improves training and prediction efficiency while maintaining state-of-the-art performance. Findings validate the feasibility of using simple cameras for high-precision topography measurements in material extrusion-based additive manufacturing.</div></div>","PeriodicalId":54589,"journal":{"name":"Precision Engineering-Journal of the International Societies for Precision Engineering and Nanotechnology","volume":"96 ","pages":"Pages 464-475"},"PeriodicalIF":3.7000,"publicationDate":"2025-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Precision Engineering-Journal of the International Societies for Precision Engineering and Nanotechnology","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0141635925002004","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, MANUFACTURING","Score":null,"Total":0}
Citations: 0

Abstract

Focus variation microscopy is a powerful surface measurement tool, but its applicability to in-situ measurement is limited. A research gap exists in adapting focus variation microscopy with inexpensive, easy-to-operate cameras to enable rapid surface topography acquisition in online measurement. To address this, we propose a novel deep learning-enhanced framework, M2CNet, in which images captured by a conventional industrial camera are first aligned with microscopy images using feature-based image registration. The aligned images are then paired with high-precision point clouds using a multi-focus window sliding technique and finally mapped to 3D point clouds via convolutional neural networks. A case study on the surface of polylactic acid (PLA) parts fabricated by fused deposition modelling (FDM) showed that the M2CNet-16 model achieved the best result, with an average surface roughness (Sq) error of 6.4%, a Pearson correlation of 83.5%, and a processing time of 2.61 s. These results indicate that M2CNet improves training and prediction efficiency while maintaining state-of-the-art performance. The findings validate the feasibility of using simple cameras for high-precision topography measurement in material extrusion-based additive manufacturing.
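The first computational stage described in the abstract is feature-based image registration between the industrial camera image and the microscopy image. The sketch below is a minimal illustration in Python of that kind of step, assuming ORB keypoints and a RANSAC-estimated homography via OpenCV; the paper does not state which detector or transform model M2CNet uses, so these choices, like the helper name `register_camera_to_microscope`, are assumptions for illustration only.

```python
# Illustrative sketch of feature-based registration between an industrial
# camera image and a focus variation microscopy image. ORB + RANSAC homography
# is an assumed choice, not necessarily the method used in the paper.
import cv2
import numpy as np

def register_camera_to_microscope(camera_img, microscope_img, max_features=5000):
    """Warp the camera image into the microscopy image frame (hypothetical helper)."""
    # Detect ORB keypoints and binary descriptors in both grayscale images.
    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(camera_img, None)
    kp2, des2 = orb.detectAndCompute(microscope_img, None)

    # Brute-force Hamming matching with cross-checking; keep the best 20 %.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    good = matches[: max(4, int(0.2 * len(matches)))]

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

    # Robustly estimate a homography and warp the camera view onto the
    # microscopy image grid so the two can be paired pixel-to-pixel.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    h, w = microscope_img.shape[:2]
    return cv2.warpPerspective(camera_img, H, (w, h))
```

In this kind of pipeline, the warped camera image can then be tiled and paired with the corresponding regions of the high-precision point cloud for network training.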
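The reported error metric, Sq, is the areal root-mean-square roughness defined in ISO 25178 as Sq = sqrt((1/A) ∬ z²(x, y) dx dy), where z is the height deviation from the reference plane over the evaluation area A. The short sketch below shows how a relative Sq error, such as the 6.4% quoted above, could be computed from two height maps; the synthetic arrays and the simple mean-subtraction levelling are assumptions for illustration, not the paper's procedure.

```python
# Minimal sketch of the areal RMS roughness Sq (ISO 25178) on a regularly
# sampled height map, with a simple mean-plane levelling. Synthetic data only.
import numpy as np

def sq_roughness(height_map):
    """Areal RMS roughness: square root of the mean squared height deviation."""
    z = np.asarray(height_map, dtype=float)
    deviations = z - z.mean()              # simplified levelling by mean subtraction
    return np.sqrt(np.mean(deviations ** 2))

# Hypothetical example: relative Sq error between a predicted and a reference
# height map (random surfaces, for illustration only).
rng = np.random.default_rng(0)
reference = rng.normal(scale=1.0, size=(256, 256))
predicted = reference + rng.normal(scale=0.05, size=(256, 256))

rel_error = abs(sq_roughness(predicted) - sq_roughness(reference)) / sq_roughness(reference)
print(f"Relative Sq error: {rel_error:.1%}")
```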
Source journal

CiteScore: 7.40
Self-citation rate: 5.60%
Annual publications: 177
Review time: 46 days

Journal description: Precision Engineering - Journal of the International Societies for Precision Engineering and Nanotechnology is devoted to the multidisciplinary study and practice of high accuracy engineering, metrology, and manufacturing. The journal takes an integrated approach to all subjects related to research, design, manufacture, performance validation, and application of high precision machines, instruments, and components, including fundamental and applied research and development in manufacturing processes, fabrication technology, and advanced measurement science. The scope includes precision-engineered systems and supporting metrology over the full range of length scales, from atom-based nanotechnology and advanced lithographic technology to large-scale systems, including optical and radio telescopes and macrometrology.