{"title":"Private least absolute deviations with heavy-tailed data","authors":"Di Wang , Jinhui Xu","doi":"10.1016/j.tcs.2025.115071","DOIUrl":null,"url":null,"abstract":"<div><div>We study the problem of Differentially Private Stochastic Convex Optimization (DPSCO) with heavy-tailed data. Specifically, we focus on the problem of Least Absolute Deviations, i.e., <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>1</mn></mrow></msub></math></span>-norm linear regression, in the <em>ϵ</em>-DP model. While most previous work focuses on the case where the loss function is Lipschitz, in this paper we only need to assume the variates have bounded moments. Firstly, we study the case where the <span><math><msub><mrow><mi>ℓ</mi></mrow><mrow><mn>2</mn></mrow></msub></math></span> norm of data has a bounded second-order moment. We propose an algorithm that is based on the exponential mechanism and show that it is possible to achieve an upper bound of <span><math><mover><mrow><mi>O</mi></mrow><mrow><mo>˜</mo></mrow></mover><mo>(</mo><msqrt><mrow><mfrac><mrow><mi>d</mi></mrow><mrow><mi>n</mi><mi>ϵ</mi></mrow></mfrac></mrow></msqrt><mo>)</mo></math></span> (with high probability). Next, we relax the assumption to bounded <em>θ</em>-th order moment with some <span><math><mi>θ</mi><mo>∈</mo><mo>(</mo><mn>1</mn><mo>,</mo><mn>2</mn><mo>)</mo></math></span> and show that it is possible to achieve an upper bound of <span><math><mover><mrow><mi>O</mi></mrow><mrow><mo>˜</mo></mrow></mover><mo>(</mo><msup><mrow><mo>(</mo><mfrac><mrow><mi>d</mi></mrow><mrow><mi>n</mi><mi>ϵ</mi></mrow></mfrac><mo>)</mo></mrow><mrow><mfrac><mrow><mi>θ</mi><mo>−</mo><mn>1</mn></mrow><mrow><mi>θ</mi></mrow></mfrac></mrow></msup><mo>)</mo></math></span>. Our algorithms can also be extended to more relaxed cases where only each coordinate of the data has bounded moments, and we can get an upper bound of <span><math><mover><mrow><mi>O</mi></mrow><mrow><mo>˜</mo></mrow></mover><mo>(</mo><mfrac><mrow><mi>d</mi></mrow><mrow><msqrt><mrow><mi>n</mi><mi>ϵ</mi></mrow></msqrt></mrow></mfrac><mo>)</mo></math></span> and <span><math><mover><mrow><mi>O</mi></mrow><mrow><mo>˜</mo></mrow></mover><mo>(</mo><mfrac><mrow><mi>d</mi></mrow><mrow><msup><mrow><mo>(</mo><mi>n</mi><mi>ϵ</mi><mo>)</mo></mrow><mrow><mfrac><mrow><mi>θ</mi><mo>−</mo><mn>1</mn></mrow><mrow><mi>θ</mi></mrow></mfrac></mrow></msup></mrow></mfrac><mo>)</mo></math></span> in the second and <em>θ</em>-th moment case respectively.</div></div>","PeriodicalId":49438,"journal":{"name":"Theoretical Computer Science","volume":"1030 ","pages":"Article 115071"},"PeriodicalIF":0.9000,"publicationDate":"2025-01-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Theoretical Computer Science","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S030439752500009X","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, THEORY & METHODS","Score":null,"Total":0}
Citations: 0
Abstract
We study the problem of Differentially Private Stochastic Convex Optimization (DPSCO) with heavy-tailed data. Specifically, we focus on the problem of Least Absolute Deviations, i.e., $\ell_1$-norm linear regression, in the $\epsilon$-DP model. While most previous work focuses on the case where the loss function is Lipschitz, in this paper we only need to assume that the variates have bounded moments. First, we study the case where the $\ell_2$ norm of the data has a bounded second-order moment. We propose an algorithm based on the exponential mechanism and show that it is possible to achieve an upper bound of $\tilde{O}\big(\sqrt{\tfrac{d}{n\epsilon}}\big)$ (with high probability). Next, we relax the assumption to a bounded $\theta$-th order moment for some $\theta \in (1,2)$ and show that it is possible to achieve an upper bound of $\tilde{O}\big(\big(\tfrac{d}{n\epsilon}\big)^{\frac{\theta-1}{\theta}}\big)$. Our algorithms can also be extended to the more relaxed setting where only each coordinate of the data has bounded moments, in which case we obtain upper bounds of $\tilde{O}\big(\tfrac{d}{\sqrt{n\epsilon}}\big)$ and $\tilde{O}\big(\tfrac{d}{(n\epsilon)^{\frac{\theta-1}{\theta}}}\big)$ in the second-moment and $\theta$-th-moment cases, respectively.
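Since the abstract names the exponential mechanism applied to the least absolute deviations objective, the following minimal Python sketch illustrates the generic exponential-mechanism template on a truncated LAD score over a finite candidate set. The candidate grid, the clipping threshold `clip`, and the resulting sensitivity bound `clip / n` are illustrative assumptions chosen to make the sketch self-contained; they are not the construction or analysis from the paper.

```python
import numpy as np

def lad_loss(theta, X, y):
    """Least absolute deviations (ell_1) loss: mean |y - X @ theta|."""
    return np.mean(np.abs(y - X @ theta))

def exp_mech_lad(X, y, candidates, epsilon, clip=1.0, rng=None):
    """Illustrative exponential-mechanism sketch for private LAD.

    `candidates` is a finite set of parameter vectors (e.g., a net over the
    constraint set); each per-sample residual is truncated at `clip` so that
    changing one record moves a candidate's score by at most clip / n.
    Both choices are assumptions for this sketch, not the paper's algorithm.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = len(y)
    # Score of each candidate: negative truncated LAD loss (higher is better).
    scores = np.array([
        -np.mean(np.minimum(np.abs(y - X @ th), clip)) for th in candidates
    ])
    sensitivity = clip / n
    # Exponential mechanism: sample with probability ∝ exp(eps * score / (2 * sensitivity)).
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()  # subtract max for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    idx = rng.choice(len(candidates), p=probs)
    return candidates[idx]
```

With a sensitivity bound of `clip / n`, this sampler satisfies $\epsilon$-DP by the standard exponential-mechanism guarantee; how the truncation level and candidate net are chosen to handle heavy tails, and how that yields the stated error rates, is the subject of the paper itself.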
Journal introduction:
Theoretical Computer Science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, provide more efficient methodologies. All papers introducing or studying mathematical, logic and formal concepts and methods are welcome, provided that their motivation is clearly drawn from the field of computing.