Generalization analysis of adversarial pairwise learning
Authors: Wen Wen, Han Li, Rui Wu, Lingjuan Wu, Hong Chen
Journal: Neural Networks, Vol. 183, Article 106955 (Q1, Computer Science, Artificial Intelligence)
Published: 2024-11-28
DOI: 10.1016/j.neunet.2024.106955
Citations: 0
Abstract
Adversarial pairwise learning has become a predominant method for enhancing the discrimination ability of models against adversarial attacks, achieving notable success in various application fields. Despite its strong empirical performance, the adversarial robustness and generalization of adversarial pairwise learning remain poorly understood from a theoretical perspective. This paper takes a step in this direction by establishing high-probability generalization bounds. Our bounds apply broadly to various models and pairwise learning tasks. We give application examples with explicit bounds for adversarial bipartite ranking and adversarial metric learning to illustrate how the theoretical results can be extended. Furthermore, we develop an optimistic generalization bound of order O(n⁻¹) in the sample size n by leveraging local Rademacher complexity. Our analysis provides meaningful theoretical guidance for improving adversarial robustness through feature size and regularization. Experimental results validate the theoretical findings.
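As a rough illustration of the Rademacher-complexity machinery invoked above (not the paper's actual construction), the sketch below Monte Carlo-estimates the empirical Rademacher complexity of a simple norm-bounded linear pairwise-score class F = { (x, x′) ↦ w·(x − x′) : ‖w‖₂ ≤ B }; for this class the supremum over w has the closed form (B/m)‖Σᵢ σᵢ zᵢ‖₂, where zᵢ = xᵢ − xᵢ′. The function name and class choice are hypothetical, for illustration only.

```python
import numpy as np

def empirical_rademacher(pairs_diff, B=1.0, n_trials=2000, seed=0):
    """Monte Carlo estimate of the empirical Rademacher complexity of the
    hypothetical linear pairwise class {(x, x') -> w.(x - x') : ||w||_2 <= B}.

    pairs_diff: (m, d) array of pair differences z_i = x_i - x_i'.
    For this class, sup_{||w||<=B} (1/m) sum_i sigma_i w.z_i
    equals (B/m) * ||sum_i sigma_i z_i||_2, so each trial is a norm evaluation.
    """
    rng = np.random.default_rng(seed)
    m = pairs_diff.shape[0]
    total = 0.0
    for _ in range(n_trials):
        sigma = rng.choice([-1.0, 1.0], size=m)   # i.i.d. Rademacher signs
        total += (B / m) * np.linalg.norm(sigma @ pairs_diff)
    return total / n_trials
```

Consistent with the standard O(m^{-1/2}) decay of (global) Rademacher complexity, the estimate shrinks as the number of pairs m grows; localized variants, as used in the paper for the O(n⁻¹) optimistic bound, restrict the supremum to a ball of small variance instead.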
Journal Introduction:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.