Efficient density estimation via piecewise polynomial approximation

Siu On Chan, Ilias Diakonikolas, Rocco A. Servedio, Xiaorui Sun
{"title":"Efficient density estimation via piecewise polynomial approximation","authors":"Siu On Chan, Ilias Diakonikolas, R. Servedio, Xiaorui Sun","doi":"10.1145/2591796.2591848","DOIUrl":null,"url":null,"abstract":"We give a computationally efficient semi-agnostic algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let p be an arbitrary distribution over an interval I, and suppose that p is τ-close (in total variation distance) to an unknown probability distribution q that is defined by an unknown partition of I into t intervals and t unknown degree d polynomials specifying q over each of the intervals. We give an algorithm that draws Õ(t(d + 1)/ε2) samples from p, runs in time poly(t, d + 1, 1/ε), and with high probability outputs a piecewise polynomial hypothesis distribution h that is (14τ + ε)-close to p in total variation distance. Our algorithm combines tools from real approximation theory, uniform convergence, linear programming, and dynamic programming. Its sample complexity is simultaneously near optimal in all three parameters t, d and ε; we show that even for τ = 0, any algorithm that learns an unknown t-piecewise degree-d probability distribution over I to accuracy ε must use [EQUATION] samples from the distribution, regardless of its running time. We apply this general algorithm to obtain a wide range of results for many natural density estimation problems over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of t-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of k-monotone densities. Our general technique gives improved results, with provably optimal sample complexities (up to logarithmic factors) in all parameters in most cases, for all these problems via a single unified algorithm.","PeriodicalId":123501,"journal":{"name":"Proceedings of the forty-sixth annual ACM symposium on Theory of computing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"117","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the forty-sixth annual ACM symposium on Theory of computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2591796.2591848","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 117

Abstract

We give a computationally efficient semi-agnostic algorithm for learning univariate probability distributions that are well approximated by piecewise polynomial density functions. Let p be an arbitrary distribution over an interval I, and suppose that p is τ-close (in total variation distance) to an unknown probability distribution q that is defined by an unknown partition of I into t intervals and t unknown degree-d polynomials specifying q over each of the intervals. We give an algorithm that draws Õ(t(d + 1)/ε²) samples from p, runs in time poly(t, d + 1, 1/ε), and with high probability outputs a piecewise polynomial hypothesis distribution h that is (14τ + ε)-close to p in total variation distance. Our algorithm combines tools from real approximation theory, uniform convergence, linear programming, and dynamic programming. Its sample complexity is simultaneously near optimal in all three parameters t, d and ε; we show that even for τ = 0, any algorithm that learns an unknown t-piecewise degree-d probability distribution over I to accuracy ε must use Ω(t(d + 1)/ε²) samples from the distribution, regardless of its running time. We apply this general algorithm to obtain a wide range of results for many natural density estimation problems over both continuous and discrete domains. These include state-of-the-art results for learning mixtures of log-concave distributions; mixtures of t-modal distributions; mixtures of Monotone Hazard Rate distributions; mixtures of Poisson Binomial Distributions; mixtures of Gaussians; and mixtures of k-monotone densities. Our general technique yields improved results for all of these problems via a single unified algorithm, with sample complexities that are provably optimal (up to logarithmic factors) in all parameters in most cases.
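To make the shape of the output hypothesis concrete, here is a minimal Python sketch of a t-piece degree-d density estimator on [0, 1]. It is emphatically not the paper's algorithm: the paper chooses the breakpoints adaptively via dynamic programming and fits polynomials in A_k distance via linear programming, whereas this toy fixes a uniform partition and fits each piece to an empirical histogram by least squares. The function names (`sample_budget`, `fit_piecewise_poly`), the constant and log factor in the sample budget, and the histogram-plus-least-squares fitting step are all illustrative assumptions.

```python
import numpy as np

def sample_budget(t: int, d: int, eps: float) -> int:
    """Heuristic sample budget mirroring the paper's Õ(t(d+1)/eps^2) bound.

    The constant and the log factor are illustrative, not from the paper.
    """
    base = t * (d + 1) / eps**2
    return int(np.ceil(base * np.log(base + 2)))

def fit_piecewise_poly(samples, t=5, d=2, bins_per_piece=20):
    """Toy estimator: uniform t-piece partition of [0, 1], with a degree-d
    least-squares fit to the empirical histogram on each piece.

    NOT the paper's algorithm (which picks breakpoints by dynamic
    programming and fits in A_k distance via linear programming); this
    only illustrates the form of the piecewise polynomial hypothesis h.
    """
    samples = np.asarray(samples)
    edges = np.linspace(0.0, 1.0, t + 1)
    pieces = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        # Empirical density values on a fine grid inside this piece.
        grid = np.linspace(lo, hi, bins_per_piece + 1)
        counts, _ = np.histogram(samples, bins=grid)
        width = (hi - lo) / bins_per_piece
        dens = counts / (len(samples) * width)
        mid = 0.5 * (grid[:-1] + grid[1:])
        coeffs = np.polyfit(mid, dens, deg=d)  # degree-d least-squares fit
        pieces.append((lo, hi, coeffs))

    def h(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        out = np.zeros_like(x)
        for lo, hi, coeffs in pieces:
            mask = (x >= lo) & (x <= hi)
            out[mask] = np.polyval(coeffs, x[mask])
        return np.clip(out, 0.0, None)  # densities are nonnegative

    return h

# Usage: learn a Beta(2, 5) density with the heuristic sample budget.
rng = np.random.default_rng(0)
n = sample_budget(t=5, d=2, eps=0.1)
data = rng.beta(2, 5, size=n)
h = fit_piecewise_poly(data, t=5, d=2)
print(h([0.1, 0.3, 0.7]))
```

Even this crude version reflects the parameter roles in the theorem: more pieces t or higher degree d buys expressive power (and raises the sample budget linearly in t(d + 1)), while the accuracy ε enters quadratically, matching the Ω(t(d + 1)/ε²) lower bound up to the log factor.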