Matthew McKinney, Anthony Garland, Dale Cillessen, Jesse Adamczyk, Dan Bolintineanu, Michael Heiden, Elliott Fowler, Brad L. Boyce
Title: Unsupervised multimodal fusion of in-process sensor data for advanced manufacturing process monitoring
DOI: 10.1016/j.jmsy.2024.12.003
Journal: Journal of Manufacturing Systems, Volume 78, Pages 271-282 (Q1, Engineering, Industrial; IF 12.2)
Published: 2025-02-01 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0278612524002966
Citations: 0
Abstract
Effective monitoring of manufacturing processes is crucial for maintaining product quality and operational efficiency. Modern manufacturing environments often generate vast amounts of complementary multimodal data, including visual imagery from various perspectives and resolutions, hyperspectral data, and machine health monitoring information such as actuator positions, accelerometer readings, and temperature measurements. However, fusing and interpreting this complex, high-dimensional data presents significant challenges, particularly when labeled datasets are unavailable or impractical to obtain. This paper presents a novel approach to multimodal sensor data fusion in manufacturing processes, inspired by the Contrastive Language-Image Pre-training (CLIP) model. We leverage contrastive learning techniques to correlate different data modalities without the need for labeled data, overcoming limitations of traditional supervised machine learning methods in manufacturing contexts. Our proposed method demonstrates the ability to handle and learn encoders for five distinct modalities: visual imagery, audio signals, laser x-position, laser y-position, and laser power measurements. By compressing these high-dimensional datasets into low-dimensional representational spaces, our approach facilitates downstream tasks such as process control, anomaly detection, and quality assurance. The unsupervised nature of our method makes it broadly applicable across various manufacturing domains, where large volumes of unlabeled sensor data are common. We evaluate the effectiveness of our approach through a series of experiments, demonstrating its potential to enhance process monitoring capabilities in advanced manufacturing systems. This research contributes to the field of smart manufacturing by providing a flexible, scalable framework for multimodal data fusion that can adapt to diverse manufacturing environments and sensor configurations.
The proposed method paves the way for more robust, data-driven decision-making in complex manufacturing processes.
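The abstract's core idea, CLIP-style contrastive alignment of sensor modalities without labels, can be illustrated with a minimal sketch. This is not the authors' implementation: the symmetric InfoNCE loss below, the embedding dimensions, and the temperature value are all illustrative assumptions; in the paper's setting, `emb_a` and `emb_b` would be the outputs of learned encoders for two modalities (e.g., imagery and laser power) at matched time steps.

```python
import numpy as np

def normalize(x):
    # L2-normalize each row so cosine similarity reduces to a dot product
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def cross_entropy(logits, labels):
    # Numerically stable log-softmax cross-entropy over each row
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def clip_style_loss(emb_a, emb_b, temperature=0.07):
    """Symmetric InfoNCE loss: embeddings from two modalities captured at
    the same time step are positives; all other pairings are negatives.
    Minimizing this pulls matched sensor readings together in the shared
    low-dimensional space, with no labels required."""
    a, b = normalize(emb_a), normalize(emb_b)
    logits = a @ b.T / temperature      # pairwise cosine similarities
    labels = np.arange(len(a))          # i-th sample of a matches i-th of b
    # Average the a->b and b->a directions, as in CLIP
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))

# Illustrative usage with random stand-in embeddings (hypothetical data)
rng = np.random.default_rng(0)
batch_a = rng.normal(size=(8, 16))   # e.g., image-encoder outputs
batch_b = rng.normal(size=(8, 16))   # e.g., laser-power-encoder outputs
print(clip_style_loss(batch_a, batch_b))
```

In practice the loss would be computed pairwise (or against a shared anchor modality) across all five sensor streams and backpropagated through the encoders; this sketch only shows the two-modality alignment objective.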
Journal introduction:
The Journal of Manufacturing Systems is dedicated to showcasing cutting-edge fundamental and applied research in manufacturing at the systems level. Encompassing products, equipment, people, information, control, and support functions, manufacturing systems play a pivotal role in the economical and competitive development, production, delivery, and total lifecycle of products, meeting market and societal needs.
With a commitment to publishing archival scholarly literature, the journal strives to advance the state of the art in manufacturing systems and foster innovation in crafting efficient, robust, and sustainable manufacturing systems. The focus extends from equipment-level considerations to the broader scope of the extended enterprise. The Journal welcomes research addressing challenges across various scales, including nano, micro, and macro-scale manufacturing, and spanning diverse sectors such as aerospace, automotive, energy, and medical device manufacturing.