Hölder network for improved adversarial robustness
Authors: Dazhi Zhao, Haiyan Li, Qin Luo, Wenguang Hu
Journal: Neural Networks, vol. 194, Article 108145 (JCR Q1, Computer Science, Artificial Intelligence; IF 6.3)
DOI: 10.1016/j.neunet.2025.108145
Publication date: 2025-09-23
URL: https://www.sciencedirect.com/science/article/pii/S0893608025010251
Citation count: 0
Abstract
A small Lipschitz constant can help improve robustness and generalization by restricting the sensitivity of the model to input perturbations. However, overly aggressive constraints may also limit the network’s ability to approximate complex functions. In this paper, we propose the Hölder network, a novel architecture utilizing α-rectified power units (α-RePU). This framework generalizes Lipschitz-constrained networks by enforcing α-Hölder continuity. We theoretically prove that α-RePU networks are universal approximators of Hölder continuous functions, thereby offering greater flexibility than models with hard Lipschitz constraints. Empirical results show that the Hölder network achieves comparable accuracy and superior adversarial robustness against a wide range of attacks (e.g., PGD and l∞) on both image classification and tabular data benchmarks.
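The abstract does not spell out the α-RePU activation, but a rectified power unit is conventionally max(0, x) raised to a power; assuming the α-RePU takes the form max(0, x)^α with α ∈ (0, 1], it is α-Hölder continuous (|f(x) − f(y)| ≤ |x − y|^α), which matches the Hölder-continuity property the abstract attributes to the architecture. A minimal sketch under that assumption, with an empirical check of the Hölder bound:

```python
import numpy as np

def alpha_repu(x, alpha=0.5):
    """Hypothetical alpha-rectified power unit: max(0, x) ** alpha.

    For alpha in (0, 1] this map is alpha-Hoelder continuous, since
    t -> t**alpha is concave on [0, inf) and ReLU is 1-Lipschitz:
        |f(x) - f(y)| <= |x - y| ** alpha.
    The exact formulation in the paper may differ; this is an
    illustrative sketch, not the authors' implementation.
    """
    return np.maximum(0.0, x) ** alpha

# Spot-check the alpha-Hoelder inequality on random input pairs.
rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
alpha = 0.5
lhs = np.abs(alpha_repu(x, alpha) - alpha_repu(y, alpha))
rhs = np.abs(x - y) ** alpha
print(bool(np.all(lhs <= rhs + 1e-12)))  # expect True
```

With α = 1 the unit reduces to the ordinary ReLU (1-Lipschitz), so this family interpolates between hard Lipschitz constraints and weaker Hölder ones, which is the flexibility the abstract claims over Lipschitz-constrained networks.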
About the journal:
Neural Networks is a platform that aims to foster an international community of scholars and practitioners interested in neural networks, deep learning, and other approaches to artificial intelligence and machine learning. Our journal invites submissions covering various aspects of neural networks research, from computational neuroscience and cognitive modeling to mathematical analyses and engineering applications. By providing a forum for interdisciplinary discussions between biology and technology, we aim to encourage the development of biologically-inspired artificial intelligence.