{"title":"Lifeisgood:基于标签内交换的机器故障诊断中非分布泛化的不变量特征学习","authors":"Zhenling Mo;Zijun Zhang;Kwok-Leung Tsui","doi":"10.1109/TCYB.2025.3578712","DOIUrl":null,"url":null,"abstract":"In machine fault diagnosis, conventional data-driven models trained by empirical risk minimization (ERM) often fail to generalize across domains with distinct data distributions caused by various machine operating conditions. One major reason is that ERM primarily focuses on informativeness of data labels and lacks sufficient attention on invariance of data features. To enable invariance on top of informativeness, a learning framework, learning invariant features via in-label swapping for generalizing out-of-distribution (Lifeisgood), is proposed in this study. Lifeisgood is inspired by a simple intuition that invariance can be assessed by checking changes in loss due to swapping certain entries of features with the same labels. Lifeisgood also enjoys a theoretical guarantee on improving testing domain performance under certain conditions based on a swapping 0-1 loss proposed in this work. To circumvent the training difficulties associated with the swapping 0-1 loss, a swapping cross-entropy loss is derived as a surrogate and theoretical justifications for such a relaxation are also provided. As a result, Lifeisgood can be employed conveniently to develop data-driven fault diagnosis models. In the experiments, Lifeisgood outperformed the majority of state-of-the-art methods in terms of average accuracy and exceeded the second-best by 25% in terms of the frequency of beating the generic ERM. 
The code is available at: <uri>https://github.com/mozhenling/doge-lifeisgood</uri>","PeriodicalId":13112,"journal":{"name":"IEEE Transactions on Cybernetics","volume":"55 8","pages":"3699-3712"},"PeriodicalIF":9.4000,"publicationDate":"2025-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Lifeisgood: Learning Invariant Features via In-Label Swapping for Generalizing Out-of-Distribution in Machine Fault Diagnosis\",\"authors\":\"Zhenling Mo;Zijun Zhang;Kwok-Leung Tsui\",\"doi\":\"10.1109/TCYB.2025.3578712\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In machine fault diagnosis, conventional data-driven models trained by empirical risk minimization (ERM) often fail to generalize across domains with distinct data distributions caused by various machine operating conditions. One major reason is that ERM primarily focuses on informativeness of data labels and lacks sufficient attention on invariance of data features. To enable invariance on top of informativeness, a learning framework, learning invariant features via in-label swapping for generalizing out-of-distribution (Lifeisgood), is proposed in this study. Lifeisgood is inspired by a simple intuition that invariance can be assessed by checking changes in loss due to swapping certain entries of features with the same labels. Lifeisgood also enjoys a theoretical guarantee on improving testing domain performance under certain conditions based on a swapping 0-1 loss proposed in this work. To circumvent the training difficulties associated with the swapping 0-1 loss, a swapping cross-entropy loss is derived as a surrogate and theoretical justifications for such a relaxation are also provided. As a result, Lifeisgood can be employed conveniently to develop data-driven fault diagnosis models. 
In the experiments, Lifeisgood outperformed the majority of state-of-the-art methods in terms of average accuracy and exceeded the second-best by 25% in terms of the frequency of beating the generic ERM. The code is available at: <uri>https://github.com/mozhenling/doge-lifeisgood</uri>\",\"PeriodicalId\":13112,\"journal\":{\"name\":\"IEEE Transactions on Cybernetics\",\"volume\":\"55 8\",\"pages\":\"3699-3712\"},\"PeriodicalIF\":9.4000,\"publicationDate\":\"2025-06-26\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Transactions on Cybernetics\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11052882/\",\"RegionNum\":1,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"AUTOMATION & CONTROL SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Cybernetics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11052882/","RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AUTOMATION & CONTROL SYSTEMS","Score":null,"Total":0}
Lifeisgood: Learning Invariant Features via In-Label Swapping for Generalizing Out-of-Distribution in Machine Fault Diagnosis
In machine fault diagnosis, conventional data-driven models trained by empirical risk minimization (ERM) often fail to generalize across domains with distinct data distributions caused by varying machine operating conditions. One major reason is that ERM primarily focuses on the informativeness of data labels and pays insufficient attention to the invariance of data features. To enable invariance on top of informativeness, a learning framework, learning invariant features via in-label swapping for generalizing out-of-distribution (Lifeisgood), is proposed in this study. Lifeisgood is inspired by a simple intuition: invariance can be assessed by checking the change in loss caused by swapping certain entries of features that share the same labels. Lifeisgood also enjoys a theoretical guarantee of improved testing-domain performance under certain conditions, based on a swapping 0-1 loss proposed in this work. To circumvent the training difficulties associated with the swapping 0-1 loss, a swapping cross-entropy loss is derived as a surrogate, and theoretical justifications for this relaxation are also provided. As a result, Lifeisgood can be conveniently employed to develop data-driven fault diagnosis models. In the experiments, Lifeisgood outperformed the majority of state-of-the-art methods in terms of average accuracy and exceeded the second-best method by 25% in terms of the frequency of beating the generic ERM baseline. The code is available at: https://github.com/mozhenling/doge-lifeisgood
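The in-label swapping intuition in the abstract can be illustrated with a minimal sketch: swap a random subset of feature dimensions between samples that share a label, then evaluate cross-entropy on both the original and the swapped features. This is an assumption-laden illustration only — the function names (`in_label_swap`, `swapping_ce_loss`), the swap fraction, and the additive combination of the two loss terms are hypothetical choices, not the paper's actual formulation; see the authors' repository for the real implementation.

```python
import numpy as np

def in_label_swap(features, labels, swap_frac=0.5, seed=None):
    """Swap a random subset of feature dimensions between samples that
    share the same label (a sketch of the in-label swapping idea)."""
    rng = np.random.default_rng(seed)
    swapped = features.copy()
    d = features.shape[1]
    n_swap = int(swap_frac * d)
    for label in np.unique(labels):
        idx = np.flatnonzero(labels == label)
        if len(idx) < 2:
            continue
        partners = rng.permutation(idx)            # in-label partner for each sample
        dims = rng.choice(d, size=n_swap, replace=False)
        swapped[np.ix_(idx, dims)] = features[np.ix_(partners, dims)]
    return swapped

def swapping_ce_loss(logits_fn, features, labels, swap_frac=0.5, seed=None):
    """Cross-entropy on original features plus cross-entropy on
    in-label-swapped features; invariant features keep both terms low.
    (Hypothetical surrogate form; the paper derives its own loss.)"""
    def ce(logits, y):
        z = logits - logits.max(axis=1, keepdims=True)   # numerically stable log-softmax
        logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -logp[np.arange(len(y)), y].mean()
    plain = ce(logits_fn(features), labels)
    swapped = ce(logits_fn(in_label_swap(features, labels, swap_frac, seed)), labels)
    return plain + swapped
```

If swapped entries truly carry label-invariant information, the second term stays close to the first, so minimizing the sum discourages the model from relying on sample-specific (spurious) feature entries.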
Journal introduction:
The scope of the IEEE Transactions on Cybernetics includes computational approaches to the field of cybernetics. Specifically, the Transactions welcomes papers on communication and control across machines, or among machines, humans, and organizations. The scope includes such areas as computational intelligence, computer vision, neural networks, genetic algorithms, machine learning, fuzzy systems, cognitive systems, decision making, and robotics, to the extent that they contribute to the theme of cybernetics or demonstrate an application of cybernetics principles.