Title: Enhancing prognostics for sparse labeled data using advanced contrastive self-supervised learning with downstream integration
Journal: Engineering Applications of Artificial Intelligence (JCR Q1, Automation & Control Systems; Impact Factor 7.5; Region 2, Computer Science)
DOI: 10.1016/j.engappai.2024.109268
Publication date: 2024-09-13 (Journal Article)
Full text: https://www.sciencedirect.com/science/article/pii/S095219762401426X
Platform: Semantic Scholar
Citations: 0
Abstract
Data-driven Prognostics and Health Management (PHM) requires extensive and well-annotated datasets for developing algorithms that can estimate and predict the health state of systems. However, acquiring run-to-failure data is costly, time-consuming, and often lacks comprehensive sampling of failure states, limiting the effectiveness of PHM models. This paper explores the use of Self-Supervised Learning (SSL) in PHM, addressing key limitations and proposing a novel contrastive SSL approach using a nested Siamese network structure to enhance degradation feature representation. The model's performance with sparse data improves by integrating downstream task information, particularly Remaining Useful Life (RUL) prediction, into the Siamese structure during SSL pre-training. This approach enforces a consistency condition that the failure times implied by two samples from the same monitoring sequence be identical. The proposed method demonstrates superior performance on the PRONOSTIA bearing dataset, outperforming state-of-the-art methods even with sparse labeling. Furthermore, the study examines the impact of the upstream–downstream relationship in learning processes, showing that fine-tuning significantly enhances RUL prediction by leveraging the foundational behaviors established during pre-training. Fine-tuning refines the model's ability to capture subtle degradation patterns by building on the initial feature representations learned in pre-training, thereby improving accuracy and robustness in RUL predictions. The generalizability of the proposed strategy is confirmed through an end-to-end tool wear prediction in a real industrial environment, illustrating the applicability of the proposed method across various datasets and models, and providing effective solutions for sparse data scenarios in prognostics.
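The abstract's key mechanism is the consistency condition: since two windows sampled from the same run-to-failure sequence share one failure event, a sample taken at time t with remaining useful life RUL must satisfy t + RUL = constant for that sequence. The paper's actual architecture and loss are not given in the abstract, but the idea can be sketched as a combined pre-training objective; all function names, the margin, and the weighting `alpha` below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def contrastive_loss(z1, z2, same_seq, margin=1.0):
    # Classic pairwise contrastive objective on Siamese-encoder embeddings:
    # pull together pairs drawn from the same monitoring sequence,
    # push apart pairs from different sequences by at least `margin`.
    d = np.linalg.norm(z1 - z2, axis=1)
    pos = same_seq * d ** 2
    neg = (1.0 - same_seq) * np.maximum(margin - d, 0.0) ** 2
    return float(np.mean(pos + neg))

def failure_time_consistency(t1, rul1, t2, rul2, same_seq):
    # Consistency condition from the abstract: two samples taken at times
    # t1 and t2 of the SAME run-to-failure sequence must imply the same
    # failure time, i.e. t1 + RUL1 == t2 + RUL2. Pairs from different
    # sequences (same_seq == 0) are excluded from the penalty.
    diff = (t1 + rul1) - (t2 + rul2)
    return float(np.mean(same_seq * diff ** 2))

def pretraining_loss(z1, z2, t1, rul1, t2, rul2, same_seq, alpha=0.5):
    # Combined SSL pre-training objective that injects the downstream RUL
    # task into the Siamese structure; `alpha` is an assumed weighting.
    return (contrastive_loss(z1, z2, same_seq)
            + alpha * failure_time_consistency(t1, rul1, t2, rul2, same_seq))
```

In a real pipeline, `rul1` and `rul2` would be the RUL head's predictions for the two branches of the Siamese network, so minimizing the consistency term during pre-training shapes the shared encoder toward degradation-aware features even when few labeled failure times are available.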
Journal Introduction
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.