Zeming Zhang, Chuanyang Li, Changhua Hu, Jianhui Hu, Meng Zhao, Mingzhe Leng, Zhaoqiang Wang, Xinyi Wan, Ziyang Zheng
{"title":"Liquid-transformer temporal self-supervised network for few-shot class-incremental fault diagnosis of servo mechanisms","authors":"Zeming Zhang , Chuanyang Li , Changhua Hu , Jianhui Hu , Meng Zhao , Mingzhe Leng , Zhaoqiang Wang , Xinyi Wan , Ziyang Zheng","doi":"10.1016/j.measurement.2026.121078","DOIUrl":null,"url":null,"abstract":"<div><div>Electric servo mechanisms are critical actuators in aerospace equipment, where emerging fault types continuously appear under varying operating conditions. Few-shot class-incremental diagnosis is therefore constrained by two coupled challenges: highly nonstationary vibration responses and extremely limited labelled samples, both of which weaken temporal representation learning and aggravate catastrophic forgetting in conventional models. To address these issues, a Liquid-Transformer Temporal Self-Supervised Network (LTTSNet) is proposed for continual recognition of both base and emerging fault classes. Its backbone integrates a Liquid Neural Network (LNN) with a Lightweight Transformer (LTransformer), in which the LNN captures local transient dynamics through learnable time constants, whereas the LTransformer models long-range cross-cycle dependencies. In the base stage, a simple framework for contrastive learning of visual representations is employed to learn invariant representations from scarce unlabelled signals by contrasting augmented views. Pseudo-labels are generated via nearest-neighbour clustering under a cosine-similarity threshold and are jointly trained with labelled samples, thereby introducing latent new-class information before incremental updates. In the incremental stage, attention-weighted prototype estimation and one-step gradient prototype distillation are jointly employed to refine new-class prototypes. The backbone is kept frozen, and only the classifier head is updated, enabling rapid adaptation to new classes while preserving old-class discrimination. 
Experiments on a laboratory electric servo mechanism fault dataset and the Case Western Reserve University bearing dataset demonstrate that LTTSNet delivers significantly improves overall accuracy, new-class recognition, and forgetting suppression under cross-condition few-shot settings with both single and compound faults.</div></div>","PeriodicalId":18349,"journal":{"name":"Measurement","volume":"272 ","pages":"Article 121078"},"PeriodicalIF":5.6000,"publicationDate":"2026-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Measurement","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0263224126007876","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2026/3/7 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Citations: 0
Abstract
Electric servo mechanisms are critical actuators in aerospace equipment, where emerging fault types continuously appear under varying operating conditions. Few-shot class-incremental diagnosis is therefore constrained by two coupled challenges: highly nonstationary vibration responses and extremely limited labelled samples, both of which weaken temporal representation learning and aggravate catastrophic forgetting in conventional models. To address these issues, a Liquid-Transformer Temporal Self-Supervised Network (LTTSNet) is proposed for continual recognition of both base and emerging fault classes. Its backbone integrates a Liquid Neural Network (LNN) with a Lightweight Transformer (LTransformer), in which the LNN captures local transient dynamics through learnable time constants, whereas the LTransformer models long-range cross-cycle dependencies. In the base stage, a simple framework for contrastive learning of visual representations (SimCLR) is employed to learn invariant representations from scarce unlabelled signals by contrasting augmented views. Pseudo-labels are generated via nearest-neighbour clustering under a cosine-similarity threshold and are jointly trained with labelled samples, thereby introducing latent new-class information before incremental updates. In the incremental stage, attention-weighted prototype estimation and one-step gradient prototype distillation are jointly employed to refine new-class prototypes. The backbone is kept frozen, and only the classifier head is updated, enabling rapid adaptation to new classes while preserving old-class discrimination. Experiments on a laboratory electric servo mechanism fault dataset and the Case Western Reserve University bearing dataset demonstrate that LTTSNet significantly improves overall accuracy, new-class recognition, and forgetting suppression under cross-condition few-shot settings with both single and compound faults.
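The liquid-network component of the backbone can be illustrated with a minimal recurrent cell in which each hidden unit carries its own learnable time constant, so different units integrate the signal at different time scales. This is a generic leaky-integration sketch of the idea, not the paper's implementation: the class name `LiquidCell`, the Euler discretisation, and all parameter shapes are assumptions.

```python
import numpy as np

class LiquidCell:
    """Leaky recurrent cell with per-unit learnable time constants (tau).

    A minimal stand-in for the liquid-neural-network idea: each hidden
    unit relaxes toward a nonlinear target at its own time scale, so
    fast transients and slow trends land in different units.
    """

    def __init__(self, n_in, n_hidden, dt=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))
        self.b = np.zeros(n_hidden)
        self.tau = np.full(n_hidden, 2.0)  # learnable in a real model
        self.dt = dt

    def step(self, x, u):
        # Euler step of dx/dt = (target - x) / tau
        target = np.tanh(self.W_in @ np.atleast_1d(u) + self.W_rec @ x + self.b)
        return x + (self.dt / self.tau) * (target - x)

    def run(self, signal):
        x = np.zeros(self.W_rec.shape[0])
        states = []
        for u in signal:
            x = self.step(x, u)
            states.append(x.copy())
        return np.stack(states)
```

Because each state moves toward a tanh target, activations stay bounded regardless of sequence length, which is part of what makes such cells stable on long nonstationary signals.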
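The base-stage objective ("contrasting augmented views") is, in the SimCLR framework the abstract names, the normalised-temperature cross-entropy (NT-Xent) loss. A NumPy sketch follows, assuming `z1[i]` and `z2[i]` are embeddings of two augmentations of the same sample; the temperature value is illustrative.

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over a batch of paired augmented views (SimCLR-style).

    For each embedding, its augmentation partner is the positive; every
    other embedding in the doubled batch is a negative.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    n = len(z1)
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    # The positive for index i is i + n, and vice versa.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

The loss is low when each embedding is closer to its augmentation partner than to everything else, which is exactly the invariance property the abstract describes.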
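The pseudo-labelling step — nearest-neighbour assignment under a cosine-similarity threshold — can be sketched as below. The function names, the threshold value, and the use of class prototypes as the labelled references are assumptions made for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    a = a / (np.linalg.norm(a, axis=-1, keepdims=True) + 1e-12)
    b = b / (np.linalg.norm(b, axis=-1, keepdims=True) + 1e-12)
    return a @ b.T

def pseudo_label(unlabelled, prototypes, threshold=0.8):
    """Assign each unlabelled embedding the class of its nearest
    prototype by cosine similarity, or -1 if no prototype clears
    the threshold (the sample is left unlabelled)."""
    sims = cosine_sim(unlabelled, prototypes)  # shape (N, C)
    nearest = sims.argmax(axis=1)
    best = sims.max(axis=1)
    labels = np.where(best >= threshold, nearest, -1)
    return labels, best
```

Thresholding matters here: only confident assignments are mixed into training with the labelled samples, which limits the noise that wrong pseudo-labels would otherwise inject before the incremental updates.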
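The incremental stage — frozen backbone, classifier head updated with attention-weighted prototypes — can be sketched as a cosine-prototype head where adding a class only appends one row. The softmax-over-similarity weighting is one plausible reading of "attention-weighted prototype estimation"; the paper's exact weighting (and its one-step gradient distillation) may differ.

```python
import numpy as np

def attention_weighted_prototype(support):
    """Weighted mean of few-shot support embeddings: samples closer to
    the plain mean get larger weights (softmax over cosine similarity),
    down-weighting outliers in the tiny support set."""
    mean = support.mean(axis=0)
    sims = (support @ mean) / (
        np.linalg.norm(support, axis=1) * np.linalg.norm(mean) + 1e-12)
    w = np.exp(sims - sims.max())
    w = w / w.sum()
    return (w[:, None] * support).sum(axis=0)

class IncrementalHead:
    """Cosine classifier head; the backbone is assumed frozen elsewhere,
    so old-class prototypes are never touched when a class is added."""

    def __init__(self):
        self.protos = []  # one prototype row per class

    def add_class(self, support):
        self.protos.append(attention_weighted_prototype(support))

    def predict(self, z):
        P = np.stack(self.protos)
        P = P / np.linalg.norm(P, axis=1, keepdims=True)
        z = z / np.linalg.norm(z, axis=-1, keepdims=True)
        return (z @ P.T).argmax(axis=-1)
```

Because old prototypes are left untouched when `add_class` runs, old-class discrimination is preserved by construction — the mechanism behind the forgetting suppression the abstract reports.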
Journal introduction:
Contributions are invited on novel achievements in all fields of measurement and instrumentation science and technology. Authors are encouraged to submit novel material whose ultimate goal is an advancement in the state of the art of:
- measurement and metrology fundamentals
- sensors
- measurement instruments
- measurement and estimation techniques
- measurement data processing and fusion algorithms
- evaluation procedures and methodologies for plants and industrial processes
- performance analysis of systems, processes and algorithms
- mathematical models for measurement-oriented purposes
- distributed measurement systems in a connected world