{"title":"Steiner Shallow-Light Trees are Exponentially Lighter than Spanning Ones","authors":"Michael Elkin, Shay Solomon","doi":"10.1137/13094791X","DOIUrl":"https://doi.org/10.1137/13094791X","url":null,"abstract":"For a pair of parameters $\\alpha,\\beta \\ge 1$, a spanning tree $T$ of a weighted undirected $n$-vertex graph $G = (V,E,w)$ is called an \\emph{$(\\alpha,\\beta)$-shallow-light tree} (shortly, $(\\alpha,\\beta)$-SLT) of $G$ with respect to a designated vertex $rt \\in V$ if (1) it approximates all distances from $rt$ to the other vertices up to a factor of $\\alpha$, and (2) its weight is at most $\\beta$ times the weight of the minimum spanning tree $MST(G)$ of $G$. The parameter $\\alpha$ (respectively, $\\beta$) is called the \\emph{root-distortion} (resp., \\emph{lightness}) of the tree $T$. Shallow-light trees (SLTs) constitute a fundamental graph structure, with numerous theoretical and practical applications. In particular, they were used for constructing spanners, in network design, for VLSI-circuit design, for various data gathering and dissemination tasks in wireless and sensor networks, in overlay networks, and in the message-passing model of distributed computing. Tight tradeoffs between the parameters of SLTs were established by Awerbuch et al.~\\cite{ABP90, ABP91} and Khuller et al.~\\cite{KRY93}. They showed that for any $\\epsilon > 0$ there always exist $(1+\\epsilon, O(\\frac{1}{\\epsilon}))$-SLTs, and that the upper bound $\\beta = O(\\frac{1}{\\epsilon})$ on the lightness of SLTs cannot be improved. In this paper we show that using Steiner points one can build SLTs with \\emph{logarithmic lightness}, i.e., $\\beta = O(\\log \\frac{1}{\\epsilon})$. This establishes an \\emph{exponential separation} between spanning SLTs and Steiner ones. One particularly remarkable point on our tradeoff curve is $\\epsilon = 0$. In this regime our construction provides a \\emph{shortest-path tree} with weight at most $O(\\log n) \\cdot w(MST(G))$. Moreover, we prove matching lower bounds showing that all our results are tight up to constant factors. Finally, on our way to these results we settle (up to constant factors) a number of open questions that were raised by Khuller et al.~\\cite{KRY93} in SODA'93.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134398575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient Fully Homomorphic Encryption from (Standard) LWE","authors":"Zvika Brakerski, V. Vaikuntanathan","doi":"10.1109/FOCS.2011.12","DOIUrl":"https://doi.org/10.1109/FOCS.2011.12","url":null,"abstract":"We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of ``short vector problems'' on arbitrary lattices. Our construction improves on previous works in two aspects: \\begin{enumerate} \\item We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new {\\em re-linearization} technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. \\item We deviate from the ``squashing paradigm'' used in all previous works. We introduce a new {\\em dimension-modulus reduction} technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, {\\em without introducing additional assumptions}. \\end{enumerate} Our scheme has very short ciphertexts, and we therefore use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \\cdot \\mathrm{polylog}(k) + \\log |DB|$ bits per single-bit query (here, $k$ is a security parameter).","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116092246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graph Connectivities, Network Coding, and Expander Graphs","authors":"Ho Yee Cheung, L. Lau, K. M. Leung","doi":"10.1137/110844970","DOIUrl":"https://doi.org/10.1137/110844970","url":null,"abstract":"We present a new algebraic formulation to compute edge connectivities in a directed graph, using the ideas developed in network coding. This reduces the problem of computing edge connectivities to solving systems of linear equations, thus allowing us to use tools in linear algebra to design new algorithms. Using the algebraic formulation we obtain faster algorithms for computing single-source edge connectivities and all-pairs edge connectivities; in some settings the amortized time to compute the edge connectivity for one pair is sublinear. Through this connection, we have also found an interesting use of expanders and superconcentrators to design fast algorithms for some graph connectivity problems.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131447212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimal Testing of Multivariate Polynomials over Small Prime Fields","authors":"Elad Haramaty, Amir Shpilka, M. Sudan","doi":"10.1137/120879257","DOIUrl":"https://doi.org/10.1137/120879257","url":null,"abstract":"We consider the problem of testing if a given function $f : F_q^n \\rightarrow F_q$ is close to an $n$-variate degree $d$ polynomial over the finite field $F_q$ of $q$ elements. The natural, low-query, test for this property would be to pick the smallest dimension $t = t_{q,d} \\approx d/q$ such that every function of degree greater than $d$ reveals this aspect on {\\em some} $t$-dimensional affine subspace of $F_q^n$, and to test that $f$, when restricted to a {\\em random} $t$-dimensional affine subspace, is a polynomial of degree at most $d$ on this subspace. Such a test makes only $q^t$ queries, independent of $n$. Previous works, by Alon et al.~\\cite{AKKLR}, Kaufman and Ron~\\cite{KaufmanRon06}, and Jutla et al.~\\cite{JPRZ04}, showed that this natural test rejected functions that were $\\Omega(1)$-far from degree $d$ polynomials with probability at least $\\Omega(q^{-t})$. (The initial work~\\cite{AKKLR} considered only the case of $q=2$, while the work~\\cite{JPRZ04} only considered the case of prime $q$. The results in \\cite{KaufmanRon06} hold for all fields.) Thus, to get a constant probability of detecting functions that are at constant distance from the space of degree $d$ polynomials, the tests made $q^{2t}$ queries. Kaufman and Ron also noted that when $q$ is prime, $q^t$ queries are necessary. Thus these tests were off by at least a quadratic factor from known lower bounds. Bhattacharyya et al.~\\cite{BKSSZ10} gave an optimal analysis of this test for the case of the binary field and showed that the natural test actually rejects functions that are $\\Omega(1)$-far from degree $d$ polynomials with probability $\\Omega(1)$. In this work we extend this result to all fields, showing that the natural test does indeed reject functions that are $\\Omega(1)$-far from degree $d$ polynomials with $\\Omega(1)$ probability, where the constants depend only on the field size $q$. Our analysis thus shows that this test is optimal (matches known lower bounds) when $q$ is prime. The main technical ingredient in our work is a tight analysis of the number of ``hyperplanes'' (affine subspaces of co-dimension $1$) on which the restriction of a degree $d$ polynomial has degree less than $d$. We show that the number of such hyperplanes is at most $O(q^{t_{q,d}})$ -- which is tight to within constant factors.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130847916","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Randomness Complexity of Parallel Repetition","authors":"Kai-Min Chung, R. Pass","doi":"10.1109/FOCS.2011.93","DOIUrl":"https://doi.org/10.1109/FOCS.2011.93","url":null,"abstract":"Consider an $m$-round interactive protocol with soundness error $1/2$. How much extra randomness is required to decrease the soundness error to $\\delta$ through parallel repetition? Previous work, initiated by Bellare, Goldreich, and Goldwasser, shows that for \\emph{public-coin} interactive protocols with \\emph{statistical soundness}, $m \\cdot O(\\log(1/\\delta))$ bits of extra randomness suffice. In this work, we initiate a more general study of the above question. \\begin{itemize} \\item We establish the first derandomized parallel repetition theorem for public-coin interactive protocols with \\emph{computational soundness} (a.k.a. arguments). The parameters of our result essentially match the earlier works in the information-theoretic setting. \\item We show that obtaining even a sub-linear dependency on the number of rounds $m$ (i.e., $o(m) \\cdot \\log(1/\\delta)$) is impossible in the information-theoretic setting, and requires the existence of one-way functions in the computational setting. \\item We show that non-trivial derandomized parallel repetition for private-coin protocols is impossible in the information-theoretic setting and requires the existence of one-way functions in the computational setting. \\end{itemize} These results are tight in the sense that parallel repetition theorems in the computational setting can trivially be derandomized using pseudorandom generators, which are implied by the existence of one-way functions.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124434384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Min-max Graph Partitioning and Small Set Expansion","authors":"N. Bansal, U. Feige, Robert Krauthgamer, K. Makarychev, V. Nagarajan, J. Naor, Roy Schwartz","doi":"10.1109/focs.2011.79","DOIUrl":"https://doi.org/10.1109/focs.2011.79","url":null,"abstract":"We study graph partitioning problems from a min-max perspective, in which an input graph on n vertices should be partitioned into k parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are: (i) the k parts need to be of equal size, and (ii) the parts must separate a set of k given terminals. We consider a common generalization of these two problems, and design for it an O(√log n log k)-approximation algorithm. This improves over an O(log² n) approximation for the second version due to Svitkina and Tardos, and a roughly O(k log n) approximation for the first version that follows from other previous work. We also give an improved O(1)-approximation algorithm for graphs that exclude any fixed minor. Our algorithm uses a new procedure for solving the Small Set Expansion problem. In this problem, we are given a graph G and the goal is to find a non-empty subset S of V of size at most pn with minimum edge-expansion. We give an O(√log n log (1/p)) bicriteria approximation algorithm for the general case of Small Set Expansion, and an O(1) approximation algorithm for graphs that exclude any fixed minor.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"91 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125265798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"(1 + eps)-Approximate Sparse Recovery","authors":"Eric Price, David P. Woodruff","doi":"10.1109/FOCS.2011.92","DOIUrl":"https://doi.org/10.1109/FOCS.2011.92","url":null,"abstract":"The problem central to sparse recovery and compressive sensing is that of \\emph{stable sparse recovery}: we want a distribution $\\mathcal{A}$ of matrices $A \\in R^{m \\times n}$ such that, for any $x \\in R^n$ and with probability $1 - \\delta > 2/3$ over $A \\in \\mathcal{A}$, there is an algorithm to recover $\\hat{x}$ from $Ax$ with \\begin{align} \\norm{p}{\\hat{x} - x} \\leq C \\min_{k\\text{-sparse } x'} \\norm{p}{x - x'} \\end{align} for some constant $C > 1$ and norm $p$. The measurement complexity of this problem is well understood for constant $C > 1$. However, in a variety of applications it is important to obtain $C = 1+\\eps$ for a small $\\eps > 0$, and this complexity is not well understood. We resolve the dependence on $\\eps$ in the number of measurements required of a $k$-sparse recovery algorithm, up to polylogarithmic factors, for the central cases of $p=1$ and $p=2$. Namely, we give new algorithms and lower bounds that show that the number of measurements required is $k/\\eps^{p/2} \\textrm{polylog}(n)$. For $p=2$, our bound of $\\frac{1}{\\eps}k\\log(n/k)$ is tight up to \\emph{constant} factors. We also give matching bounds when the output is required to be $k$-sparse, in which case we achieve $k/\\eps^p \\textrm{polylog}(n)$. This shows that the distinction between the complexity of sparse and non-sparse outputs is fundamental.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128543408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Power of Adaptivity in Sparse Recovery","authors":"P. Indyk, Eric Price, David P. Woodruff","doi":"10.1109/FOCS.2011.83","DOIUrl":"https://doi.org/10.1109/FOCS.2011.83","url":null,"abstract":"The goal of (stable) sparse recovery is to recover a $k$-sparse approximation $x^*$ of a vector $x$ from linear measurements of $x$. Specifically, the goal is to recover $x^*$ such that $$\\norm{p}{x-x^*} \\le C \\min_{k\\text{-sparse } x'} \\norm{q}{x-x'}$$ for some constant $C$ and norm parameters $p$ and $q$. It is known that, for $p=q=1$ or $p=q=2$, this task can be accomplished using $m=O(k \\log(n/k))$ {\\em non-adaptive} measurements~\\cite{CRT06:Stable-Signal} and that this bound is tight~\\cite{DIPW, FPRU, PW11}. In this paper we show that if one is allowed to perform measurements that are {\\em adaptive}, then the number of measurements can be considerably reduced. Specifically, for $C=1+\\epsilon$ and $p=q=2$ we show: \\begin{itemize} \\item A scheme with $m=O(\\frac{1}{\\eps}k \\log\\log(n\\eps/k))$ measurements that uses $O(\\log^* k \\cdot \\log\\log(n\\eps/k))$ rounds. This is a significant improvement over the best possible non-adaptive bound. \\item A scheme with $m=O(\\frac{1}{\\eps}k \\log(k/\\eps) + k \\log(n/k))$ measurements that uses {\\em two} rounds. This improves over the best possible non-adaptive bound. \\end{itemize} To the best of our knowledge, these are the first results of this type.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"287 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122205302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Polylogarithmic-Competitive Algorithm for the k-Server Problem","authors":"N. Bansal, Niv Buchbinder, A. Madry, J. Naor","doi":"10.1145/2783434","DOIUrl":"https://doi.org/10.1145/2783434","url":null,"abstract":"We give the first polylogarithmic-competitive randomized algorithm for the k-server problem on an arbitrary finite metric space. In particular, our algorithm achieves a competitive ratio of Õ(log³ n log² k) for any metric space on n points. This improves upon the (2k-1)-competitive algorithm of Koutsoupias and Papadimitriou (J. ACM 1995) whenever n is sub-exponential in k.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123175847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lexicographic Products and the Power of Non-linear Network Coding","authors":"A. Błasiak, Robert D. Kleinberg, E. Lubetzky","doi":"10.1109/FOCS.2011.39","DOIUrl":"https://doi.org/10.1109/FOCS.2011.39","url":null,"abstract":"We introduce a technique for establishing and amplifying gaps between parameters of network coding and index coding problems. The technique uses linear programs to establish separations between combinatorial and coding-theoretic parameters and applies hypergraph lexicographic products to amplify these separations. This entails combining the dual solutions of the lexicographic multiplicands and proving that this is a valid dual solution of the product. Our result is general enough to apply to a large family of linear programs. This blend of linear programs and lexicographic products gives a recipe for constructing hard instances in which the gap between combinatorial or coding-theoretic parameters is polynomially large. We find polynomial gaps in cases in which the largest previously known gaps were only small constant factors or entirely unknown. Most notably, we show a polynomial separation between linear and non-linear network coding rates. This involves exploiting a connection between matroids and index coding to establish a previously unknown separation between linear and non-linear index coding rates. We also construct index coding problems with a polynomial gap between the broadcast rate and the trivial lower bound for which no gap was previously known.","PeriodicalId":326048,"journal":{"name":"2011 IEEE 52nd Annual Symposium on Foundations of Computer Science","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129748506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}