Jun Fu;Xianrui Ji;Dexiong Chen;Guosheng Hu;Shuang Li;Xiating Feng
DOI: 10.1109/TNNLS.2025.3562363
Journal: IEEE Transactions on Neural Networks and Learning Systems, vol. 36, no. 9, pp. 16379-16391
Publication date: 2025-03-06 (Journal Article); Impact factor: 8.9; JCR Q1 (Computer Science, Artificial Intelligence)
URL: https://ieeexplore.ieee.org/document/10988885/
AdvMixUp: Adversarial MixUp Regularization for Deep Learning
Deep neural networks (DNNs) have achieved significant progress in many application fields. However, overfitting remains a major challenge in their development. While existing data-augmentation techniques such as MixUp have been successful in preventing overfitting, they often fail to generate hard mixed samples near the decision boundary, impeding model optimization. In this article, we present adversarial MixUp (AdvMixUp), a novel sample-dependent method for regularizing DNNs. AdvMixUp addresses this issue by incorporating adversarial training (AT) to create sample-dependent, feature-level interpolation masks, generating more challenging mixed samples. These virtual samples enable DNNs to learn more robust features, ultimately reducing overfitting. Empirical evaluations on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet demonstrate that AdvMixUp outperforms existing MixUp variants.
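The abstract stays at a high level, but the mechanism it builds on can be sketched: standard MixUp interpolates two samples with a scalar drawn from a Beta distribution, while AdvMixUp (per the abstract) replaces the scalar with a sample-dependent, element-wise mask at the feature level. The sketch below shows both forms; the function names are illustrative, and the mask is a placeholder input, since the paper's adversarial mask optimization is not specified in this abstract.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=1.0, rng=None):
    """Standard MixUp: a convex combination of two samples and their
    labels, with a scalar weight lam drawn from Beta(alpha, alpha)."""
    if rng is None:
        rng = np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2
    return x, y, lam

def masked_feature_mixup(f1, f2, mask):
    """Feature-level MixUp with an element-wise interpolation mask
    (same shape as the features, values in [0, 1]). AdvMixUp learns
    such masks adversarially; here the mask is simply an argument,
    standing in for the paper's adversarially optimized mask."""
    mask = np.clip(mask, 0.0, 1.0)
    return mask * f1 + (1.0 - mask) * f2
```

A mask of all ones returns the first feature map unchanged and all zeros returns the second; intermediate, per-element values produce the harder mixed samples the abstract describes.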
Journal Introduction:
The focus of IEEE Transactions on Neural Networks and Learning Systems is to present scholarly articles discussing the theory, design, and applications of neural networks as well as other learning systems. The journal primarily highlights technical and scientific research in this domain.