{"title":"The Monotone Complexity of k-clique on Random Graphs","authors":"Benjamin Rossman","doi":"10.1137/110839059","DOIUrl":"https://doi.org/10.1137/110839059","url":null,"abstract":"It is widely suspected that ErdH{o}s-R'enyi random graphs are a source of hard instances for clique problems. Giving further evidence for this belief, we prove the first average-case hardness result for the $k$-clique problem on monotone circuits. Specifically, we show that no monotone circuit of size $O(n^{k/4})$ solves the $k$-clique problem with high probability on $ER(n,p)$ for two sufficiently far-apart threshold functions $p(n)$ (for instance $n^{-2/(k-1)}$ and $2n^{-2/(k-1)}$). Moreover, the exponent $k/4$ in this result is tight up to an additive constant. One technical contribution of this paper is the introduction of {em quasi-sunflowers}, a new relaxation of sunflowers in which petals may overlap slightly on average. A ``quasi-sunflower lemma'' (`a la the ErdH{o}s-Rado sunflower lemma) leads to our novel lower bounds within Razborov's method of approximations.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114252174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Local List Decoding with a Constant Number of Queries","authors":"Avraham Ben-Aroya, K. Efremenko, A. Ta-Shma","doi":"10.1109/FOCS.2010.88","DOIUrl":"https://doi.org/10.1109/FOCS.2010.88","url":null,"abstract":"Recently Efremenko showed locally-decodable codes of sub-exponential length. That result showed that these codes can handle up to $frac{1}{3} $ fraction of errors. In this paper we show that the same codes can be locally unique-decoded from error rate $half-alpha$ for any $alpha>0$ and locally list-decoded from error rate $1-alpha$ for any $alpha>0$, with only a constant number of queries and a constant alphabet size. This gives the first sub-exponential codes that can be locally list-decoded with a constant number of queries.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114507447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Sub-exponential Upper Bound for On-Line Chain Partitioning","authors":"B. Bosek, Tomasz Krawczyk","doi":"10.1109/FOCS.2010.40","DOIUrl":"https://doi.org/10.1109/FOCS.2010.40","url":null,"abstract":"The main question in the on-line chain partitioning problem is to determine whether there exists an algorithm that partitions on-line posets of width at most $w$ into polynomial number of chains – see Trotter's chapter Partially ordered sets in the Handbook of Combinatorics. So far the best known on-line algorithm of Kier stead used at most $(5^w-1)/4$ chains, on the other hand Szemer'{e}di proved that any on-line algorithm requires at least $binom{w+1}{2}$ chains. These results were obtained in the early eighties and since then no progress in the general case has been done. We provide an on-line algorithm that partitions orders of width $w$ into at most $w^{16log{w}}$ chains. This yields the first sub-exponential upper bound for on-line chain partitioning problem.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128385628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning Convex Concepts from Gaussian Distributions with PCA","authors":"S. Vempala","doi":"10.1109/FOCS.2010.19","DOIUrl":"https://doi.org/10.1109/FOCS.2010.19","url":null,"abstract":"We present a new algorithm for learning a convex set in $n$-dimensional space given labeled examples drawn from any Gaussian distribution. The complexity of the algorithm is bounded by a fixed polynomial in $n$ times a function of $k$ and $eps$ where $k$ is the dimension of the {em normal subspace} (the span of normal vectors to supporting hyper planes of the convex set) and the output is a hypothesis that correctly classifies at least $1-eps$ of the unknown Gaussian distribution. For the important case when the convex set is the intersection of $k$ half spaces, the complexity is [ poly(n,k,1/eps) + n cdot min , k^{O(log k/eps^4)}, (k/eps)^{O(k)}, ] improving substantially on the state of the art cite{Vem04,KOS08} for Gaussian distributions. The key step of the algorithm is a Singular Value Decomposition after applying a normalization. The proof is based on a monotonicity property of Gaussian space under convex restrictions.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126253615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Replacement Paths via Fast Matrix Multiplication","authors":"O. Weimann, R. Yuster","doi":"10.1109/FOCS.2010.68","DOIUrl":"https://doi.org/10.1109/FOCS.2010.68","url":null,"abstract":"Let G be a directed edge-weighted graph and let P be a shortest path from s to t in G. The replacement paths problem asks to compute, for every edge e on P, the shortest s-to-t path that avoids e. Apart from approximation algorithms and algorithms for special graph classes, the naive solution to this problem – removing each edge e on P one at a time and computing the shortest s-to-t path each time – is surprisingly the only known solution for directed weighted graphs, even when the weights are integrals. In particular, although the related shortest paths problem has benefited from fast matrix multiplication, the replacement paths problem has not, and still required cubic time. For an n-vertex graph with integral edge-lengths between -M and M, we give a randomized algorithm that uses fast matrix multiplication and is sub-cubic for appropriate values of M. We also show how to construct a distance sensitivity oracle in the same time bounds. A query (u,v,e) to this oracle requires sub-quadratic time and returns the length of the shortest u-to-v path that avoids the edge e. In fact, for any constant number of edge failures, we construct a data structure in sub-cubic time, that answer queries in sub-quadratic time. Our results also apply for avoiding vertices rather than edges.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131091573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating the Longest Increasing Sequence in Polylogarithmic Time","authors":"M. Saks, C. Seshadhri","doi":"10.1137/130942152","DOIUrl":"https://doi.org/10.1137/130942152","url":null,"abstract":"Finding the length of the longest increasing subsequence (LIS) is a classic algorithmic problem. Let $n$ denote the size of the array. Simple O(n log n) time algorithms are known that determine the LIS exactly. In this paper, we develop a randomized approximation algorithm, that for any constant delta > 0, runs in time polylogarithmic in n and estimates the length of the LIS of an array up to an additive error of (delta n). The algorithm presented in this extended abstract runs in time (log n)^{O(1/delta)}. In the full paper, we will give an improved version of the algorithm with running time (log n)^c (1/delta)^{O(1/delta)} where the exponent c is independent of delta. Previously, the best known polylogarithmic time algorithms could only achieve an additive n/2-approximation. Our techniques also yield a fast algorithm for estimating the distance to monotonicity to within a small multiplicative factor. The distance of f to monotonicity, eps_f, is equal to 1 - |LIS|/n (the fractional length of the complement of the LIS). For any delta > 0, we give an algorithm with running time O((eps^{-1}_f log n)^{O(1/delta)}) that outputs a (1+delta)-multiplicative approximation to eps_f. This can be improved so that the exponent is a fixed constant. The previously known polylogarithmic algorithms gave only a 2-approximation.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132156299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Testing Properties of Sparse Images","authors":"D. Ron, Gilad Tsur","doi":"10.1145/2635806","DOIUrl":"https://doi.org/10.1145/2635806","url":null,"abstract":"We initiate the study of testing properties of images that correspond to sparse 0/1-valued matrices of size n × n. Our study is related to but different from the study initiated by Raskhodnikova (Proceedings of RANDOM, 2003), where the images correspond to dense 0/1-valued matrices. Specifically, while distance between images in the model studied by Raskhodnikova is the fraction of entries on which the images differ taken with respect to all n^2 entries, the distance measure in our model is defined by the fraction of such entries taken with respect to the actual number of 1’s in the matrix. We study several natural properties: connectivity, convexity, monotonicity, and being a line. In all cases we give testing algorithms with sub linear complexity, and in some of the cases we also provide corresponding lower bounds.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121420110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Coin Problem and Pseudorandomness for Branching Programs","authors":"Joshua Brody, Elad Verbin","doi":"10.1109/FOCS.2010.10","DOIUrl":"https://doi.org/10.1109/FOCS.2010.10","url":null,"abstract":"The emph{Coin Problem} is the following problem: a coin is given, which lands on head with probability either $1/2 + beta$ or $1/2 - beta$. We are given the outcome of $n$ independent tosses of this coin, and the goal is to guess which way the coin is biased, and to answer correctly with probability $ge 2/3$. When our computational model is unrestricted, the majority function is optimal, and succeeds when $beta ge c /sqrt{n}$ for a large enough constant $c$. The coin problem is open and interesting in models that cannot compute the majority function. In this paper we study the coin problem in the model of emph{read-once width-$w$ branching programs}. We prove that in order to succeed in this model, $beta$ must be at least $1/ (log n)^{Theta(w)}$. For constant $w$ this is tight by considering the recursive tribes function, and for other values of $w$ this is nearly tight by considering other read-once AND-OR trees. We generalize this to a emph{Dice Problem}, where instead of independent tosses of a coin we are given independent tosses of one of two $m$-sided dice. We prove that if the distributions are too close and the mass of each side of the dice is not too small, then the dice cannot be distinguished by small-width read-once branching programs. We suggest one application for this kind of theorems: we prove that Nisan's Generator fools width-$w$ read-once emph{regular} branching programs, using seed length $O(w^4 log n log log n + log n log (1/eps))$. For $w=eps=Theta(1)$, this seed length is $O(log n log log n)$. The coin theorem and its relatives might have other connections to PRGs. This application is related to the independent, but chronologically-earlier, work of Braver man, Rao, Raz and Yehudayoff~cite{BRRY}.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"1027 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116258373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Subcubic Equivalences between Path, Matrix and Triangle Problems","authors":"V. V. Williams, Ryan Williams","doi":"10.1145/3186893","DOIUrl":"https://doi.org/10.1145/3186893","url":null,"abstract":"We say an algorithm on n by n matrices with entries in [-M, M] (or n-node graphs with edge weights from [-M, M]) is truly sub cubic if it runs in O(n^{3-delta} poly(log M)) time for some delta > 0. We define a notion of sub cubic reducibility, and show that many important problems on graphs and matrices solvable in O(n^3) time are equivalent under sub cubic reductions. Namely, the following weighted problems either all have truly sub cubic algorithms, or none of them do: - The all-pairs shortest paths problem (APSP). - Detecting if a weighted graph has a triangle of negative total edge weight. - Listing up to n^{2.99} negative triangles in an edge-weighted graph. - Finding a minimum weight cycle in a graph of non-negative edge weights. - The replacement paths problem in an edge-weighted digraph. - Finding the second shortest simple path between two nodes in an edge-weighted digraph. - Checking whether a given matrix defines a metric. - Verifying the correctness of a matrix product over the (min, +)-semiring. Therefore, if APSP cannot be solved in n^{3-eps} time for any eps > 0, then many other problems also need essentially cubic time. In fact we show generic equivalences between matrix products over a large class of algebraic structures used in optimization, verifying a matrix product over the same structure, and corresponding triangle detection problems over the structure. These equivalences simplify prior work on sub cubic algorithms for all-pairs path problems, since it now suffices to give appropriate sub cubic triangle detection algorithms. Other consequences of our work are new combinatorial approaches to Boolean matrix multiplication over the (OR, AND)-semiring (abbreviated as BMM). We show that practical advances in triangle detection would imply practical BMM algorithms, among other results. Building on our techniques, we give two new BMM algorithms: a derandomization of the recent combinatorial BMM algorithm of Bansal and Williams (FOCS'09), and an improved quantum algorithm for BMM.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133440712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pure and Bayes-Nash Price of Anarchy for Generalized Second Price Auction","authors":"R. Leme, É. Tardos","doi":"10.1109/FOCS.2010.75","DOIUrl":"https://doi.org/10.1109/FOCS.2010.75","url":null,"abstract":"The Generalized Second Price Auction has been the main mechanism used by search companies to auction positions for advertisements on search pages. In this paper we study the social welfare of the Nash equilibria of this game in various models. In the full information setting, socially optimal Nash equilibria are known to exist (i.e., the Price of Stability is 1). This paper is the first to prove bounds on the price of anarchy, and to give any bounds in the Bayesian setting. Our main result is to show that the price of anarchy is small assuming that all bidders play un-dominated strategies. In the full information setting we prove a bound of 1.618 for the price of anarchy for pure Nash equilibria, and a bound of 4 for mixed Nash equilibria. We also prove a bound of 8 for the price of anarchy in the Bayesian setting, when valuations are drawn independently, and the valuation is known only to the bidder and only the distributions used are common knowledge. Our proof exhibits a combinatorial structure of Nash equilibria and uses this structure to bound the price of anarchy. While establishing the structure is simple in the case of pure and mixed Nash equilibria, the extension to the Bayesian setting requires the use of novel combinatorial techniques that can be of independent interest.","PeriodicalId":228365,"journal":{"name":"2010 IEEE 51st Annual Symposium on Foundations of Computer Science","volume":"351 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115971994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}