{"title":"Adversarial Robustness of Sparse Local Lipschitz Predictors","authors":"Ramchandran Muthukumar, Jeremias Sulam","doi":"10.1137/22m1478835","DOIUrl":null,"url":null,"abstract":"This work studies the adversarial robustness of parametric functions composed of a linear predictor and a nonlinear representation map. Our analysis relies on sparse local Lipschitzness (SLL), an extension of local Lipschitz continuity that better captures the stability and reduced effective dimensionality of predictors upon local perturbations. SLL functions preserve a certain degree of structure, given by the sparsity pattern in the representation map, and include several popular hypothesis classes, such as piecewise linear models, Lasso and its variants, and deep feedforward ReLU networks. Compared with traditional Lipschitz analysis, we provide a tighter robustness certificate on the minimal energy of an adversarial example, as well as tighter data-dependent nonuniform bounds on the robust generalization error of these predictors. We instantiate these results for the case of deep neural networks and provide numerical evidence that supports our results, shedding new insights into natural regularization strategies to increase the robustness of these models.","PeriodicalId":74797,"journal":{"name":"SIAM journal on mathematics of data science","volume":"971 ","pages":"0"},"PeriodicalIF":1.9000,"publicationDate":"2023-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"SIAM journal on mathematics of data science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1137/22m1478835","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"MATHEMATICS, APPLIED","Score":null,"Total":0}
Citations: 8
Abstract
This work studies the adversarial robustness of parametric functions composed of a linear predictor and a nonlinear representation map. Our analysis relies on sparse local Lipschitzness (SLL), an extension of local Lipschitz continuity that better captures the stability and reduced effective dimensionality of predictors under local perturbations. SLL functions preserve a certain degree of structure, given by the sparsity pattern in the representation map, and include several popular hypothesis classes, such as piecewise linear models, Lasso and its variants, and deep feedforward ReLU networks. Compared with traditional Lipschitz analysis, we provide a tighter robustness certificate on the minimal energy of an adversarial example, as well as tighter data-dependent nonuniform bounds on the robust generalization error of these predictors. We instantiate these results for deep neural networks and provide numerical evidence that supports our findings, shedding new light on natural regularization strategies to increase the robustness of these models.
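For context, the sketch below records the standard Lipschitz-based robustness certificate that analyses of this kind refine. The first two displays are well-known facts; the sparse local constant \(L_S(x, r)\) at the end is a schematic placeholder suggested by the abstract's description, not the paper's exact definition. For a binary classifier \(f\) with \(f(x) > 0\) that is \(L\)-Lipschitz,

\[
  f(x + \delta) \;\ge\; f(x) - L\,\|\delta\|,
\]

so the predicted label \(\operatorname{sign} f(x)\) cannot be flipped by any perturbation satisfying

\[
  \|\delta\| \;<\; \frac{f(x)}{L}.
\]

If \(f\) is instead only \(L(x, r)\)-Lipschitz on the ball of radius \(r\) around \(x\), the certified radius becomes

\[
  \min\Big\{\, r,\; \frac{f(x)}{L(x, r)} \,\Big\},
\]

which can be substantially larger when the local constant \(L(x, r)\) is much smaller than the global \(L\) (taking \(r\) large enough that the minimum is attained by the second term). Under the abstract's description, an SLL certificate would restrict the constant further to the sparsity pattern (e.g., the ReLU activation pattern) active near \(x\), giving a constant \(L_S(x, r) \le L(x, r)\) and hence a larger certified radius.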