{"title":"Communication Complexity with Synchronized Clocks","authors":"R. Impagliazzo, Ryan Williams","doi":"10.1109/CCC.2010.32","DOIUrl":"https://doi.org/10.1109/CCC.2010.32","url":null,"abstract":"We consider two natural extensions of the communication complexity model that are inspired by distributed computing. In both models, two parties are equipped with synchronized discrete clocks, and we assume that a bit can be sent from one party to another in one step of time. Both models allow implicit communication, by allowing the parties to choose whether to send a bit during each step. We examine trade-offs between time (total number of possible time steps elapsed) and communication (total number of bits actually sent). In the synchronized bit model, we measure the total number of bits sent between the two parties (e.g., email). We show that, in this model, communication costs can differ from the usual communication complexity by a factor roughly logarithmic in the number of time steps, and no more than such a factor. In the synchronized connection model, both parties choose whether or not to open their end of the communication channel at each time step. An exchange of bits takes place only when both ends of the channel are open (e.g., instant messaging), in which case we say that a {em connection} has occurred. If a party does not open its end, it does not learn whether the other party opened its channel. When we restrict the number of time steps to be polynomial in the input length, and the number of connections to be polylogarithmic in the input length, the class of problems solved with this model turns out to be roughly equivalent to the communication complexity analogue of P^{NP}. Using our new model, we give what we believe to be the first lower bounds for this class, separating P^{NP} from Sigma_2 intersect Pi_2 in the communication complexity setting. Although these models are both quite natural, they have unexpected power, and lead to a refinement of problem classifications in communication complexity.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133866123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the Matching Problem for Special Graph Classes","authors":"T. Hoang","doi":"10.1109/CCC.2010.21","DOIUrl":"https://doi.org/10.1109/CCC.2010.21","url":null,"abstract":"An even cycle in a graph is called {em nice} by Lov{'a}sz and Plummer in [LP86] if the graph obtained by deleting all vertices of the cycle has some perfect matching. In the present paper we prove some new complexity bounds for various versions of problems related to perfect matchings in graphs with a polynomially bounded number of nice cycles. We show that for graphs with a polynomially bounded number of nice cycles the perfect matching decision problem is in $SPL$, it is hard for $FewL$, and the perfect matching construction problem is in $L^{C_=L} cap oplus L$. Furthermore, we significantly improve the best known upper bounds, proved by Agrawal, Hoang, and Thierauf in the STACS'07-paper [AHT07], for the polynomially bounded perfect matching problem by showing that the construction and the counting versions are in $C_=L cap oplus L$ and in $C_=L$, respectively. Note that $SPL, oplus L, C_=L, $ and $L^{C_=L}$ are contained in $NC^2$. Moreover, we show that the problem of computing a maximum matching for bipartite planar graphs is in $L^{C_=L}$. This solves Open Question 4.7 stated in the STACS'08-paper by Datta, Kulkarni, and Roy [DKR08] where it is asked whether computing a maximum matching even for bipartite planar graphs can be done in $NC$. We also show that the problem of computing a maximum matching for graphs with a polynomially bounded number of even cycles is in $L^{C_=L}$.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132482621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Learning with Errors Problem (Invited Survey)","authors":"O. Regev","doi":"10.1109/CCC.2010.26","DOIUrl":"https://doi.org/10.1109/CCC.2010.26","url":null,"abstract":"In this survey we describe the Learning with Errors (LWE) problem, discuss its properties, its hardness, and its cryptographic applications.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122887920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exact Threshold Circuits","authors":"Kristoffer Arnsfelt Hansen, V. Podolskii","doi":"10.1109/CCC.2010.33","DOIUrl":"https://doi.org/10.1109/CCC.2010.33","url":null,"abstract":"We initiate a systematic study of constant depth Boolean circuits built using exact threshold gates. We consider both unweighted and weighted exact threshold gates and introduce corresponding circuit classes. We next show that this gives a hierarchy of classes that seamlessly interleave with the well-studied corresponding hierarchies defined using ordinary threshold gates. A major open problem in Boolean circuit complexity is to provide an explicit super-polynomial lower bound for depth two threshold circuits. We identify the class of depth two exact threshold circuits as a natural subclass of these where also no explicit lower bounds are known. Many of our results can be seen as evidence that this class is a strict subclass of depth two threshold circuits --- thus we argue that efforts in proving lower bounds should be directed towards this class.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123406586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Program-Enumeration Bottleneck in Average-Case Complexity Theory","authors":"L. Trevisan","doi":"10.1109/CCC.2010.18","DOIUrl":"https://doi.org/10.1109/CCC.2010.18","url":null,"abstract":"Three fundamental results of Levin involve algorithms or reductions whose running time is exponential in the length of certain programs. We study the question of whether such dependency can be made polynomial. begin{enumerate} item Levin's ``optimal search algorithm'' performs at most a constant factor more slowly than any other fixed algorithm. The constant, however, is exponential in the length of the competing algorithm. We note that the running time of a universal search cannot be made ``fully polynomial'' (that is, the relation between slowdown and program length cannot be made polynomial), unless P=NP. item Levin's ``universal one-way function'' result has the following structure: there is a polynomial time computable function $f_{rm Levin}$ such that if there is a polynomial time computable adversary $A$ that inverts $f_{rm Levin}$ on an inverse polynomial fraction of inputs, then for every polynomial time computable function $g$ there also is a polynomial time adversary $A_g$ that inverts $g$ on an inverse polynomial fraction of inputs. Unfortunately, again the running time of $A_g$ depends exponentially on the bit length of the program that computes $g$ in polynomial time. We show that a fully polynomial uniform reduction from an arbitrary one-way function to a specific one-way function is not possible relative to an oracle that we construct, and so no ``universal one-way function'' can have a fully polynomial security analysis via relativizing techniques. item Levin's completeness result for distributional NP problems implies that if a specific problem in NP is easy on average under the uniform distribution, then every language $L$ in NP is also easy on average under any polynomial time computable distribution. The running time of the implied algorithm for $L$, however, depends exponentially on the bit length of the non-deterministic polynomial time Turing machine that decides $L$. We show that if a completeness result for distributional NP can be proved via a ``fully uniform'' and ``fully polynomial'' time reduction, then there is a worst-case to average-case reduction for NP-complete problems. In particular, this means that a fully polynomial completeness result for distributional NP is impossible, even via randomized truth-table reductions, unless the polynomial hierarchy collapses. end{enumerate}","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"21 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123652296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lower Bounds for Testing Function Isomorphism","authors":"Eric Blais, R. O'Donnell","doi":"10.1109/CCC.2010.30","DOIUrl":"https://doi.org/10.1109/CCC.2010.30","url":null,"abstract":"We prove new lower bounds in the area of property testing of boolean functions. Specifically, we study the problem of testing whether a boolean function $f$ is isomorphic to a fixed function $g$ (i.e., is equal to $g$ up to permutation of the input variables). The analogous problem for testing graphs was solved by Fischer in 2005. The setting of boolean functions, however, appears to be more difficult, and no progress has been made since the initial study of the problem by Fischer et al. in 2004. Our first result shows that any non-adaptive algorithm for testing isomorphism to a function that ``strongly'' depends on $k$ variables requires $log k - O(1)$ queries (assuming $k/n$ is bounded away from 1). This lower bound affirms and strengthens a conjecture appearing in the 2004 work of Fischer et al. Its proof relies on total variation bounds between hypergeometric distributions which may be of independent interest. Our second result concerns the simplest interesting case not covered by our first result: non-adaptively testing isomorphism to the Majority function on $k$ variables. Here we show that $Omega(k^{1/12})$ queries are necessary (again assuming $k/n$ is bounded away from 1). The proof of this result relies on recently developed multidimensional invariance principle tools.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115515783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Symmetry Coincides with Nondeterminism for Time-Bounded Auxiliary Pushdown Automata","authors":"E. Allender, Klaus-Jörn Lange","doi":"10.4086/toc.2014.v010a008","DOIUrl":"https://doi.org/10.4086/toc.2014.v010a008","url":null,"abstract":"We show that every language accepted by a nondeterministic auxiliary pushdown automaton in polynomial time (that is, every language in SAC^1 = Log(CFL)) can be accepted by a symmetric auxiliary pushdown automaton in polynomial time.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130522870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Log-Space Algorithm for Reachability in Planar Acyclic Digraphs with Few Sources","authors":"Derrick Stolee, Chris Bourke, N. V. Vinodchandran","doi":"10.1109/CCC.2010.36","DOIUrl":"https://doi.org/10.1109/CCC.2010.36","url":null,"abstract":"Designing algorithms that use logarithmic space for graph reachability problems is fundamental to complexity theory. It is well known that for general directed graphs this problem is equivalent to the NL vs L problem. This paper focuses on the reachability problem over planar graphs where the complexity is unknown. Showing that the planar reachability problem is NL-complete would show that nondeterministic log-space computations can be made unambiguous. On the other hand, very little is known about classes of planar graphs that admit log-space algorithms. We present a new ‘source-based’ structural decomposition method for planar DAGs. Based on this decomposition, we show that reachability for planar DAGs with m sources can be decided deterministically in O(m+log n) space. This leads to a log-space algorithm for reachability in planar DAGs with O(log n) sources. Our result drastically improves the class of planar graphs for which we know how to decide reachability in deterministic log-space. Specifically, the class extends from planar DAGs with at most two sources to at most O(log n) sources.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"58 32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116509054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Simple Affine Extractors Using Dimension Expansion","authors":"Matt DeVos, Ariel Gabizon","doi":"10.1109/CCC.2010.14","DOIUrl":"https://doi.org/10.1109/CCC.2010.14","url":null,"abstract":"Let $F$ be the field of $q$ elements. An emph{afsext{n}{k}} is a mapping $D:F^narB$ such that for any $k$-dimensional affine subspace $Xsubseteq F^n$, $D(x)$ is an almost unbiased bit when $x$ is chosen uniformly from $X$. Loosely speaking, the problem of explicitly constructing affine extractors gets harder as $q$ gets smaller and easier as $k$ gets larger. This is reflected in previous results: When $q$ is `large enough', specifically $q= Omega(n^2)$, Gabizon and Raz cite{GR05} construct affine extractors for any $kgeq 1$. In the `hardest case', i.e. when $q=2$, Bourgain cite{Bour05} constructs affine extractors for $kgeq delta n$ for any constant (and even slightly sub-constant) $delta>0$. Our main result is the following: Fix any $kgeq 2$ and let $d = 5n/k$. Then whenever $q>2cdot d^2$ and $p=char(F)>d$, we give an explicit afsext{n}{k}. For example, when $k=delta n$ for constant $delta>0$, we get an extractor for a field of constant size $Omega(left(frac{1}{delta}right)^2)$. We also get weaker results for fields of arbitrary characteristic (but can still work with a constant field size when $k=delta n $ for constant $delta > 0$). Thus our result may be viewed as a `field-size/dimension' tradeoff for affine extractors. For a wide range of $k$ this gives a new result, but even for large $k$ where we do not improve (or even match) the previous result of cite{Bour05}, we believe that our construction and proof have the advantage of being very simple: Assume $n$ is prime and $d$ is odd, and fix any non-trivial linear map $T:F^nmapsto F$. Define $QR:Fmapsto B$ by $QR(x)=1$ if and only if $x$ is a quadratic residue. Then, the function $D:F^nmapsto B$ defined by $D(x)triangleq QR(T(x^d))$ is an afsext{n}{k}. Our proof uses a result of Heur, Leung and Xiang cite{HLX02} giving a lower bound on the dimension of products of subspaces.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115019436","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trade-Off Lower Bounds for Stack Machines","authors":"Matei David, Periklis A. Papakonstantinou","doi":"10.1109/CCC.2010.23","DOIUrl":"https://doi.org/10.1109/CCC.2010.23","url":null,"abstract":"A space bounded Stack Machine is a regular Turing Machine with a read-only input tape, several space bounded read-write work tapes, and an unbounded stack. Stack Machines with a logarithmic space bound have been connected to other classical models of computation, such as polynomial time Turing Machines (P) (Cook; 1971) and polynomial size, polylogarithmic depth, bounded fan-in circuits (NC) e.g., (Borodin et al.; 1989). In this paper, we give the first known lower bound for Stack Machines. This comes in the form of a trade-off lower bound between space and number of passes over the input tape. Specifically, we give an explicit permuted inner product function such that any Stack Machine computing this function requires either sublinear polynomial space or sublinear polynomial number of passes. In the case of logarithmic space Stack Machines, this yields an unconditional sublinear polynomial lower bound for the number of passes. To put this result in perspective, we note that Stack Machines with logarithmic space and a single pass over the input can compute Parity, Majority, as well as certain languages outside NC. The latter follows from (Allender; 1989), conditional on the widely believed complexity assumption that EXP is different from PSPACE. Our technique is a novel communication complexity reduction, thereby extending the already wide range of models of computation for which communication complexity can be used to obtain lower bounds. Informally, we show that a k-player number-in-hand communication protocol for a base function f can efficiently simulate a space- and pass-bounded Stack Machine for a related function F, which consists of several permuted instances of f, bundled together by a combining function h. Trade-off lower bounds for Stack Machines then follow from known communication complexity lower bounds. The framework for this reduction was given by (Beame and Huynh-Ngoc; 2008), who used it to obtain similar trade-off lower bounds for Turing Machines with a constant number of pass-bounded external tapes. We also prove that the latter cannot efficiently simulate Stack Machines, conditional on the complexity assumption that E is not a subset of PSPACE. It is the treatment of an unbounded stack which constitutes the main technical novelty in our communication complexity reduction.","PeriodicalId":328781,"journal":{"name":"2010 IEEE 25th Annual Conference on Computational Complexity","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127678074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}