High Dimensional Differentially Private Stochastic Optimization with Heavy-tailed Data

Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang
{"title":"具有重尾数据的高维差分私有随机优化","authors":"Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang","doi":"10.1145/3517804.3524144","DOIUrl":null,"url":null,"abstract":"As one of the most fundamental problems in machine learning, statistics and differential privacy, Differentially Private Stochastic Convex Optimization (DP-SCO) has been extensively studied in recent years. However, most of the previous work can only handle either regular data distributions or irregular data in the low dimensional space case. To better understand the challenges arising from irregular data distributions, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space. In the first part we focus on the problem over some polytope constraint (such as the l1-norm ball). We show that if the loss function is smooth and its gradient has bounded second order moment, it is possible to get a (high probability) error bound (excess population risk) of Õ(log d/(nε)1/3) in the ε-DP model, where n is the sample size and d is the dimension of the underlying space. Next, for LASSO, if the data distribution has bounded fourth-order moments, we improve the bound to Õ(log d/(nε)2/5) in the $(ε, δ)-DP model. In the second part of the paper, we study sparse learning with heavy-tailed data. We first revisit the sparse linear model and propose a truncated DP-IHT method whose output could achieve an error of Õ ((s*2 log2d)/nε), where s* is the sparsity of the underlying parameter. Then we study a more general problem over the sparsity (i.e., l0-norm) constraint, and show that it is possible to achieve an error of Õ((s*3/2 log d)/nε), which is also near optimal up to a factor of Õ(√s*), if the loss function is smooth and strongly convex.","PeriodicalId":230606,"journal":{"name":"Proceedings of the 41st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems","volume":"21 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"29","resultStr":"{\"title\":\"High Dimensional Differentially Private Stochastic Optimization with Heavy-tailed Data\",\"authors\":\"Lijie Hu, Shuo Ni, Hanshen Xiao, Di Wang\",\"doi\":\"10.1145/3517804.3524144\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"As one of the most fundamental problems in machine learning, statistics and differential privacy, Differentially Private Stochastic Convex Optimization (DP-SCO) has been extensively studied in recent years. However, most of the previous work can only handle either regular data distributions or irregular data in the low dimensional space case. To better understand the challenges arising from irregular data distributions, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space. In the first part we focus on the problem over some polytope constraint (such as the l1-norm ball). We show that if the loss function is smooth and its gradient has bounded second order moment, it is possible to get a (high probability) error bound (excess population risk) of Õ(log d/(nε)1/3) in the ε-DP model, where n is the sample size and d is the dimension of the underlying space. Next, for LASSO, if the data distribution has bounded fourth-order moments, we improve the bound to Õ(log d/(nε)2/5) in the $(ε, δ)-DP model. In the second part of the paper, we study sparse learning with heavy-tailed data. 
We first revisit the sparse linear model and propose a truncated DP-IHT method whose output could achieve an error of Õ ((s*2 log2d)/nε), where s* is the sparsity of the underlying parameter. Then we study a more general problem over the sparsity (i.e., l0-norm) constraint, and show that it is possible to achieve an error of Õ((s*3/2 log d)/nε), which is also near optimal up to a factor of Õ(√s*), if the loss function is smooth and strongly convex.\",\"PeriodicalId\":230606,\"journal\":{\"name\":\"Proceedings of the 41st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems\",\"volume\":\"21 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-07-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"29\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 41st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3517804.3524144\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 41st ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3517804.3524144","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 29

Abstract

As one of the most fundamental problems in machine learning, statistics and differential privacy, Differentially Private Stochastic Convex Optimization (DP-SCO) has been extensively studied in recent years. However, most of the previous work can only handle either regular data distributions or irregular data in the low dimensional space case. To better understand the challenges arising from irregular data distributions, in this paper we provide the first study on the problem of DP-SCO with heavy-tailed data in the high dimensional space. In the first part we focus on the problem over some polytope constraint (such as the ℓ1-norm ball). We show that if the loss function is smooth and its gradient has bounded second order moment, it is possible to get a (high probability) error bound (excess population risk) of Õ(log d/(nε)^{1/3}) in the ε-DP model, where n is the sample size and d is the dimension of the underlying space. Next, for LASSO, if the data distribution has bounded fourth-order moments, we improve the bound to Õ(log d/(nε)^{2/5}) in the (ε, δ)-DP model. In the second part of the paper, we study sparse learning with heavy-tailed data. We first revisit the sparse linear model and propose a truncated DP-IHT method whose output could achieve an error of Õ(s*^2 log^2 d/(nε)), where s* is the sparsity of the underlying parameter. Then we study a more general problem over the sparsity (i.e., ℓ0-norm) constraint, and show that it is possible to achieve an error of Õ(s*^{3/2} log d/(nε)), which is also near optimal up to a factor of Õ(√s*), if the loss function is smooth and strongly convex.
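The abstract mentions a truncated DP-IHT method for the sparse linear model. As a rough illustration of that style of algorithm (not the paper's actual construction or calibration), below is a minimal NumPy sketch of a private iterative-hard-thresholding loop: per-sample gradient entries are truncated to bound their contribution on heavy-tailed data, Laplace noise calibrated to that bound is added, and the iterate is hard-thresholded to the target sparsity. The function name, constants (tau, eta, T) and the even per-iteration budget split are placeholder assumptions.

```python
import numpy as np

def truncated_dp_iht(X, y, s, eps, T=10, eta=0.2, tau=1.0, seed=0):
    """Illustrative private IHT for sparse linear regression with heavy-tailed data.

    Per-sample gradient entries are clipped at `tau` (truncation), which bounds
    the sensitivity used to set the Laplace noise scale. All constants here are
    placeholders, not the paper's calibration.
    """
    n, d = X.shape
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)
    eps_step = eps / T  # naive composition: split the privacy budget evenly over T steps
    for _ in range(T):
        residual = X @ theta - y                     # shape (n,)
        per_sample = X * residual[:, None]           # per-sample gradients, shape (n, d)
        per_sample = np.clip(per_sample, -tau, tau)  # truncation step for heavy tails
        grad = per_sample.mean(axis=0)
        # Laplace noise with an illustrative scale; a careful analysis would
        # calibrate this to the exact l1-sensitivity (at most 2*tau*d/n here).
        noise = rng.laplace(scale=2.0 * tau * d / (n * eps_step), size=d)
        theta = theta - eta * (grad + noise)
        # hard thresholding: keep only the s largest-magnitude coordinates
        support = np.argsort(np.abs(theta))[-s:]
        sparse_theta = np.zeros(d)
        sparse_theta[support] = theta[support]
        theta = sparse_theta
    return theta

# Toy usage on synthetic heavy-tailed data (Student-t covariates and noise).
rng = np.random.default_rng(1)
X = rng.standard_t(df=3, size=(2000, 100))
theta_star = np.zeros(100)
theta_star[:5] = 1.0
y = X @ theta_star + rng.standard_t(df=3, size=2000)
theta_hat = truncated_dp_iht(X, y, s=5, eps=1.0)
```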