Superpolynomial Lower Bounds for Learning Monotone Classes

N. Bshouty
{"title":"Superpolynomial Lower Bounds for Learning Monotone Classes","authors":"N. Bshouty","doi":"10.48550/arXiv.2301.08486","DOIUrl":null,"url":null,"abstract":"Koch, Strassle, and Tan [SODA 2023], show that, under the randomized exponential time hypothesis, there is no distribution-free PAC-learning algorithm that runs in time $n^{\\tilde O(\\log\\log s)}$ for the classes of $n$-variable size-$s$ DNF, size-$s$ Decision Tree, and $\\log s$-Junta by DNF (that returns a DNF hypothesis). Assuming a natural conjecture on the hardness of set cover, they give the lower bound $n^{\\Omega(\\log s)}$. This matches the best known upper bound for $n$-variable size-$s$ Decision Tree, and $\\log s$-Junta. In this paper, we give the same lower bounds for PAC-learning of $n$-variable size-$s$ Monotone DNF, size-$s$ Monotone Decision Tree, and Monotone $\\log s$-Junta by~DNF. This solves the open problem proposed by Koch, Strassle, and Tan and subsumes the above results. The lower bound holds, even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time, and can compute the target function on all the points of the support of the distribution in polynomial time.","PeriodicalId":11639,"journal":{"name":"Electron. Colloquium Comput. Complex.","volume":"46 1","pages":"34:1-34:20"},"PeriodicalIF":0.0000,"publicationDate":"2023-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Electron. Colloquium Comput. Complex.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.48550/arXiv.2301.08486","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Koch, Strassle, and Tan [SODA 2023] show that, under the randomized exponential time hypothesis, there is no distribution-free PAC-learning algorithm that runs in time $n^{\tilde O(\log\log s)}$ for the classes of $n$-variable size-$s$ DNF, size-$s$ Decision Tree, and $\log s$-Junta by DNF (i.e., with a learner that returns a DNF hypothesis). Assuming a natural conjecture on the hardness of set cover, they give the lower bound $n^{\Omega(\log s)}$. This matches the best known upper bound for $n$-variable size-$s$ Decision Tree and $\log s$-Junta. In this paper, we give the same lower bounds for PAC-learning of $n$-variable size-$s$ Monotone DNF, size-$s$ Monotone Decision Tree, and Monotone $\log s$-Junta by DNF. This solves the open problem posed by Koch, Strassle, and Tan and subsumes the above results. The lower bound holds even if the learner knows the distribution, can draw a sample according to the distribution in polynomial time, and can compute the target function on all points of the support of the distribution in polynomial time.
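For context, here is a minimal Python sketch (illustrative only, not from the paper) of the concept classes named above: a monotone DNF is an OR of terms, each an AND of un-negated variables, and a $k$-junta depends on at most $k$ of the $n$ inputs. The specific formulas, variable counts, and term sets below are invented for illustration.

    # Illustrative sketch of the concept classes mentioned in the abstract.
    from itertools import product

    n = 6  # number of Boolean variables (chosen small for brute force)

    # A size-s monotone DNF: an OR of s terms, each term an AND of
    # un-negated variables. Each term is a frozenset of variable indices;
    # this example has size s = 3.
    monotone_dnf = [frozenset({0, 1}), frozenset({2}), frozenset({3, 4, 5})]

    def eval_monotone_dnf(dnf, x):
        """True iff some term has all its variables set to 1 in x."""
        return any(all(x[i] for i in term) for term in dnf)

    # A k-junta depends on at most k variables. With s = 8 and
    # k = log2(s) = 3, this function reads only variables 0, 2, and 4.
    def log_s_junta(x):
        return bool((x[0] and x[2]) or x[4])

    def is_monotone(f, n):
        """Brute-force check: flipping any bit 0 -> 1 never flips the
        output 1 -> 0. Exponential in n; fine for tiny n."""
        for x in product([0, 1], repeat=n):
            for i in range(n):
                if x[i] == 0:
                    y = list(x)
                    y[i] = 1
                    if f(x) and not f(tuple(y)):
                        return False
        return True

    assert is_monotone(lambda x: eval_monotone_dnf(monotone_dnf, x), n)
    assert is_monotone(log_s_junta, n)

The lower bounds in the paper say that even for such structurally simple monotone targets, no PAC learner outputting a DNF hypothesis can beat time $n^{\Omega(\log s)}$ (under the stated hardness assumptions).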