Mutual-learning based self-supervised knowledge distillation framework for remaining useful life prediction under variable working condition-induced domain shift scenarios

Impact Factor 11.0 | CAS Tier 1 (Engineering & Technology) | JCR Q1, Engineering, Industrial
Zhuohang Chen, Jinglong Chen, Zijun Liu, Yulang Liu
Journal: Reliability Engineering & System Safety, Volume 264, Article 111359
DOI: 10.1016/j.ress.2025.111359
Published: 2025-06-08 | Cited by: 0
URL: https://www.sciencedirect.com/science/article/pii/S0951832025005605

Abstract

Domain shifts induced by variable working conditions, covering both multiple steady and time-varying working conditions, result in inconsistent degradation patterns and pose significant challenges for remaining useful life (RUL) prediction. To address this issue, we propose a self-supervised knowledge distillation framework based on mutual learning for RUL prediction under variable working conditions. The proposed framework employs a teacher-student architecture, facilitating knowledge transfer through self-supervised pseudo-labels. A mutual learning-based training strategy is developed to prevent over-adaptation to the source domain and to promote domain generalization. Additionally, during student model training, a feature-level domain-adversarial training strategy is implemented to improve cross-domain feature decoupling and ensure the learning of domain-invariant features. These two components complement each other: adversarial learning aligns marginal distributions (across variable working conditions), while pseudo-label learning refines conditional alignment (across normal and fast degradation stages), allowing the model to adapt more effectively to complex degradation scenarios. Furthermore, we incorporate a sparse attention mechanism for efficient feature extraction, significantly reducing computational complexity while maintaining robust performance. RUL prediction experiments under multiple steady conditions and under time-varying conditions are carried out on two run-to-failure bearing datasets, respectively. Comparative results demonstrate the superiority and practicality of the proposed method.
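The abstract describes two complementary objectives: pseudo-label distillation between teacher and student (with mutual learning to keep the teacher from over-adapting to the source domain), and feature-level domain-adversarial training via what is typically realized as a gradient-reversal layer. The sketch below is only an illustration of how these objectives might interact; the loss forms, the trade-off weight `lam`, and the `reverse_gradient` helper are assumptions, not the authors' implementation.

```python
import numpy as np

def mse(a, b):
    """Mean-squared error between two prediction vectors."""
    return float(np.mean((np.asarray(a) - np.asarray(b)) ** 2))

def mutual_distillation_losses(student_pred, teacher_pred, rul_labels, lam=0.5):
    """Hedged sketch of a mutual-learning distillation objective:
    the student fits source-domain RUL labels and the teacher's
    pseudo-labels, while the teacher is in turn nudged toward the
    student, discouraging over-adaptation to the source domain.
    `lam` is an assumed trade-off weight, not a value from the paper."""
    supervised = mse(student_pred, rul_labels)    # source-domain regression loss
    distill    = mse(student_pred, teacher_pred)  # teacher pseudo-labels -> student
    mutual     = mse(teacher_pred, student_pred)  # student predictions -> teacher
    student_loss = supervised + lam * distill
    teacher_loss = lam * mutual
    return student_loss, teacher_loss

def reverse_gradient(grad, alpha=1.0):
    """Backward pass of a gradient-reversal layer (forward is identity):
    flipping the sign of the feature gradient turns the domain
    classifier's minimization into adversarial maximization for the
    feature extractor, encouraging domain-invariant features."""
    return -alpha * np.asarray(grad)
```

For example, with student predictions [0.2, 0.4], teacher predictions [0.3, 0.5], and labels [0.25, 0.45], the student loss is 0.0025 + 0.5 * 0.01 = 0.0075 and the teacher loss is 0.005: the teacher term is small but nonzero, which is what keeps the two models learning from each other rather than the student simply copying a frozen teacher.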
Source journal: Reliability Engineering & System Safety (Management Science - Engineering: Industrial)
CiteScore: 15.20 | Self-citation rate: 39.50% | Articles per year: 621 | Review time: 67 days
Journal description: Elsevier publishes Reliability Engineering & System Safety in association with the European Safety and Reliability Association and the Safety Engineering and Risk Analysis Division. The international journal is devoted to developing and applying methods to enhance the safety and reliability of complex technological systems, like nuclear power plants, chemical plants, hazardous waste facilities, space systems, offshore and maritime systems, transportation systems, constructed infrastructure, and manufacturing plants. The journal normally publishes only articles that involve the analysis of substantive problems related to the reliability of complex systems or present techniques and/or theoretical results that have a discernable relationship to the solution of such problems. An important aim is to balance academic material and practical applications.