Adversarial Mixup-Based Contrast Learning for Data-Driven Predictive Maintenance in Long-Tailed Recognition

IF 10.9 · CAS Tier 2 (Computer Science) · JCR Q1 (Engineering, Electrical & Electronic)
Ru Peng;Xingyu Chen;Xuguang Lan
{"title":"Adversarial Mixup-Based Contrast Learning for Data-Driven Predictive Maintenance in Long-Tailed Recognition","authors":"Ru Peng;Xingyu Chen;Xuguang Lan","doi":"10.1109/TCE.2025.3563895","DOIUrl":null,"url":null,"abstract":"Deep neural networks have achieved remarkable success in various computer vision tasks. However, in real-world applications, such as the Internet of Things (IoT), these models often struggle due to the long-tailed data distributions. For instance, in scenarios such as Holographic Counterpart Integration in IoT-based predictive maintenance for home systems or smart repair services, common operational states are prevalent in the dataset. In contrast, rare failures, such as hardware malfunctions or system breakdowns, are represented by only a few samples. This imbalance severely impacts models, making it difficult to accurately predict rare failures, leading to costly downtime or unanticipated equipment failure. Current contrastive learning-based methods are effective at optimizing feature distributions but often overlook inter-class relationships and are highly sensitive to class imbalance, which limits their generalization ability. To address these challenges, we propose the Adversarial Mixup-based supervised contrast learning (AMCL) framework, which integrates Mixup-based data augmentation with contrastive learning and incorporates an adversarial-inspired sample policy generator. AMCL generates boundary samples via a dynamically optimized Mixup strategy to enhance inter-class relationship modeling and improve predictions on ambiguous boundaries. Furthermore, we introduce a new MixCo loss function to account for the non-one-hot distribution of Mixup-generated targets, ensuring better alignment with augmented data and improving optimization efficiency. AMCL is easy to implement and achieves a performance superior to recent approaches for long-tailed recognition across various datasets such as ImageNet-LT, iNaturalist18, CIFAR-10-LT, and CIFAR-100-LT.","PeriodicalId":13208,"journal":{"name":"IEEE Transactions on Consumer Electronics","volume":"71 2","pages":"5249-5258"},"PeriodicalIF":10.9000,"publicationDate":"2025-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Consumer Electronics","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/10975772/","RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0

Abstract

Deep neural networks have achieved remarkable success in various computer vision tasks. However, in real-world applications, such as the Internet of Things (IoT), these models often struggle due to the long-tailed data distributions. For instance, in scenarios such as Holographic Counterpart Integration in IoT-based predictive maintenance for home systems or smart repair services, common operational states are prevalent in the dataset. In contrast, rare failures, such as hardware malfunctions or system breakdowns, are represented by only a few samples. This imbalance severely impacts models, making it difficult to accurately predict rare failures, leading to costly downtime or unanticipated equipment failure. Current contrastive learning-based methods are effective at optimizing feature distributions but often overlook inter-class relationships and are highly sensitive to class imbalance, which limits their generalization ability. To address these challenges, we propose the Adversarial Mixup-based supervised contrast learning (AMCL) framework, which integrates Mixup-based data augmentation with contrastive learning and incorporates an adversarial-inspired sample policy generator. AMCL generates boundary samples via a dynamically optimized Mixup strategy to enhance inter-class relationship modeling and improve predictions on ambiguous boundaries. Furthermore, we introduce a new MixCo loss function to account for the non-one-hot distribution of Mixup-generated targets, ensuring better alignment with augmented data and improving optimization efficiency. AMCL is easy to implement and achieves a performance superior to recent approaches for long-tailed recognition across various datasets such as ImageNet-LT, iNaturalist18, CIFAR-10-LT, and CIFAR-100-LT.
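The abstract does not give implementation details, but its two core ideas — Mixup-generated boundary samples with non-one-hot targets, and a contrastive loss whose positive pairs are weighted by soft-label agreement — can be illustrated with a rough sketch. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' code: the names `mixup_boundary_samples` and `soft_supcon_loss` are invented for this example, a plain Beta-distributed mixing coefficient stands in for the paper's adversarially optimized sample policy generator, and the actual MixCo loss may differ.

```python
# Illustrative sketch only (assumed, not the AMCL/MixCo implementation):
# Mixup with soft targets feeding a supervised contrastive loss whose
# positive weights come from label-distribution overlap.
import torch
import torch.nn.functional as F


def mixup_boundary_samples(x, y, num_classes, alpha=1.0):
    """Mix each sample with a randomly permuted partner; return mixed inputs
    and the resulting non-one-hot (soft) label distributions."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_onehot = F.one_hot(y, num_classes).float()
    y_soft = lam * y_onehot + (1.0 - lam) * y_onehot[perm]
    return x_mix, y_soft


def soft_supcon_loss(features, y_soft, temperature=0.1):
    """Supervised contrastive loss where each pair's positive weight is the
    overlap between the two samples' soft label distributions."""
    z = F.normalize(features, dim=1)                   # (N, D) embeddings
    logits = z @ z.t() / temperature                   # pairwise similarities
    self_mask = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(self_mask, -1e4)       # exclude self-pairs
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_w = (y_soft @ y_soft.t()).masked_fill(self_mask, 0.0)
    pos_w = pos_w / pos_w.sum(dim=1, keepdim=True).clamp_min(1e-12)
    return -(pos_w * log_prob).sum(dim=1).mean()


# Example: mix a batch, encode it, and compute the soft-target contrastive loss.
if __name__ == "__main__":
    encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                  torch.nn.Linear(3 * 32 * 32, 128))
    x = torch.randn(16, 3, 32, 32)
    y = torch.randint(0, 10, (16,))
    x_mix, y_soft = mixup_boundary_samples(x, y, num_classes=10)
    loss = soft_supcon_loss(encoder(x_mix), y_soft)
    loss.backward()
```

In the paper, the mixing coefficients are produced by the adversarial-inspired policy generator rather than drawn from a fixed Beta distribution; the sketch only shows why soft, non-one-hot targets require replacing the usual hard positive mask with pairwise label-overlap weights.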
Source Journal
CiteScore: 7.70
Self-citation rate: 9.30%
Articles per year: 59
Review time: 3.3 months
Journal description: The main focus of the IEEE Transactions on Consumer Electronics is the engineering and research aspects of the theory, design, construction, manufacture, or end use of mass market electronics, systems, software, and services for consumers.