{"title":"Properly learning monotone functions via local correction","authors":"Jane Lange, R. Rubinfeld, A. Vasilyan","doi":"10.1109/FOCS54457.2022.00015","DOIUrl":null,"url":null,"abstract":"We give a $2^{\\tilde{O}(\\sqrt{n}/\\varepsilon)}$-time algorithm for properly learning monotone Boolean functions under the uniform distribution over $\\{0,1\\}^{n}$. Our algorithm is robust to adversarial label noise and has a running time nearly matching that of the state-of-the-art improper learning algorithm of Bshouty and Tamon (JACM 96) and an information-theoretic lower bound of Blais et al (RANDOM ’15). Prior to this work, no proper learning algorithm with running time smaller than $2^{\\Omega(n)}$ was known to exist. The core of our proper learner is a local computation algorithm for sorting binary labels on a poset. Our algorithm is built on a body of work on distributed greedy graph algorithms; specifically we rely on a recent work of Ghaffari (FOCS’22), which gives an efficient algorithm for computing maximal matchings in a graph in the LCA model of Rubinfeld et al and Alon et al (ICS’II, SODA’12). The applications of our local sorting algorithm extend beyond learning on the Boolean cube: we also give a tolerant tester for Boolean functions over general posets that distinguishes functions that are $\\varepsilon$/3-close to monotone from those that are $\\varepsilon-$far. Previous tolerant testers for the Boolean cube only distinguished between $\\varepsilon/\\Omega(\\sqrt{n}$)-close and $\\varepsilon-$far.","PeriodicalId":390222,"journal":{"name":"2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/FOCS54457.2022.00015","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
We give a $2^{\tilde{O}(\sqrt{n}/\varepsilon)}$-time algorithm for properly learning monotone Boolean functions under the uniform distribution over $\{0,1\}^{n}$. Our algorithm is robust to adversarial label noise and has a running time nearly matching that of the state-of-the-art improper learning algorithm of Bshouty and Tamon (JACM '96) and an information-theoretic lower bound of Blais et al. (RANDOM '15). Prior to this work, no proper learning algorithm with running time smaller than $2^{\Omega(n)}$ was known to exist. The core of our proper learner is a local computation algorithm for sorting binary labels on a poset. Our algorithm builds on a body of work on distributed greedy graph algorithms; specifically, we rely on a recent work of Ghaffari (FOCS '22), which gives an efficient algorithm for computing maximal matchings in a graph in the LCA model of Rubinfeld et al. and Alon et al. (ICS '11, SODA '12). The applications of our local sorting algorithm extend beyond learning on the Boolean cube: we also give a tolerant tester for Boolean functions over general posets that distinguishes functions that are $\varepsilon/3$-close to monotone from those that are $\varepsilon$-far. Previous tolerant testers for the Boolean cube could only distinguish $\varepsilon/\Omega(\sqrt{n})$-close from $\varepsilon$-far.
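To make the combinatorics behind the tolerant-testing statement concrete, the following is a minimal, brute-force Python sketch, not the paper's sublinear-time LCA. It uses the standard fact that, for a Boolean-valued function, the distance to monotonicity is sandwiched between the size of any maximal matching of violated pairs in the violation graph and twice that size. The function names and the example f below are illustrative assumptions, not taken from the paper; the paper's contribution is computing such a matching locally (via Ghaffari's LCA) rather than by enumerating the whole cube.

# A minimal global (brute-force) sketch of the "violation graph + maximal matching"
# idea. This is NOT the paper's LCA: it enumerates all of {0,1}^n, so it only
# illustrates the combinatorics for tiny n. Names and the example f are illustrative.

from itertools import product


def dominates(y, x):
    """Return True if y >= x coordinate-wise (x precedes y in the Boolean cube poset)."""
    return all(yi >= xi for xi, yi in zip(x, y))


def violation_edges(f, n):
    """All violated pairs (x, y): x <= y pointwise but f(x) = 1 and f(y) = 0."""
    points = list(product((0, 1), repeat=n))
    return [(x, y) for x in points for y in points
            if x != y and dominates(y, x) and f[x] == 1 and f[y] == 0]


def greedy_maximal_matching(edges):
    """Greedily build a maximal matching in the violation graph."""
    matched, matching = set(), []
    for x, y in edges:
        if x not in matched and y not in matched:
            matching.append((x, y))
            matched.update((x, y))
    return matching


def estimate_distance_to_monotone(f, n):
    """For Boolean-valued f, the number of labels that must change to make f monotone
    lies between |M| and 2|M| for any maximal matching M of the violation graph
    (a standard fact; the matched vertices form a vertex cover of that graph)."""
    m = len(greedy_maximal_matching(violation_edges(f, n)))
    return m, 2 * m  # lower and upper bound on the distance


if __name__ == "__main__":
    n = 3
    # Example far from monotone: f(x) = 1 iff x has at most one coordinate set.
    f = {x: int(sum(x) <= 1) for x in product((0, 1), repeat=n)}
    lo, hi = estimate_distance_to_monotone(f, n)
    print(f"distance to monotone is between {lo} and {hi} points (out of {2**n})")

In the paper's setting, the analogous matching is never materialized globally: the local sorting algorithm answers queries about the corrected (monotone) labeling using only a sublinear number of probes to f, which is what makes both the proper learner and the tolerant tester efficient.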