{"title":"The history of complexity","authors":"L. Fortnow","doi":"10.1109/CCC.2002.1004340","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004340","url":null,"abstract":"Summary form only given. We describe several trends in the history of computational complexity, including: the early history of complexity; the development of NP-completeness and the structure of complexity classes; how randomness, parallelism and quantum mechanics has forced us to reexamine our notions of efficient computation and how computational complexity has responded to these new models; the meteoric rise and fall of circuit complexity; and the marriage of complexity and cryptography and how research on a cryptographic model led to limitations of approximation.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130766007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sampling short lattice vectors and the closest lattice vector problem","authors":"M. Ajtai, Ravi Kumar, D. Sivakumar","doi":"10.1109/CCC.2002.1004339","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004339","url":null,"abstract":"We present a 2/sup O(n)/ time Turing reduction from the closest lattice vector problem to the shortest lattice vector problem. Our reduction assumes access to a subroutine that solves SVP exactly and a subroutine to sample short vectors from a lattice, and computes a (1+/spl epsi/)-approximation to CVP As a consequence, using the SVP algorithm from (Ajtai et al., 2001), we obtain a randomized 2[O(1+/spl epsi//sup -1/)n] algorithm to obtain a (1+/spl epsi/)-approximation for the closest lattice vector problem in n dimensions. This improves the existing time bound of O(n!) for CVP achieved by a deterministic algorithm in (Blomer, 2000).","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"322 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124553013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rapid mixing","authors":"P. Winkler","doi":"10.1109/CCC.2002.1004347","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004347","url":null,"abstract":"Summary form only given. In the past decade many proofs of tractability, have involved showing that some Markov chain \"mixes rapidly\". We do a fast tour of the highlights of Markov chain mixing, with a view toward answering, or at least addressing, the following questions: What is rapid mixing? How do you prove it, and why would you want to? Does it really have anything to do with computational complexity? And, most disturbing: is there any truth to the rumor that complexity and rapid mixing are related to phase transitions in statistical physics?.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124833240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pseudo-random generators and structure of complete degrees","authors":"Manindra Agrawal","doi":"10.1109/CCC.2002.1004349","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004349","url":null,"abstract":"It is shown that, if there exist sets in E (the exponential complexity class) that require 2/sup /spl Omega/(n)/-sized circuits, then sets that are hard for class P (the polynomial complexity class) and above, under 1-1 reductions, are also hard under 1-1 size-increasing reductions. Under the assumption of the hardness of solving the RSA (Rivest-Shamir-Adleman, 1978) problem or the discrete log problem, it is shown that sets that are hard for class NP (nondeterministic polynomial) and above, under many-1 reductions, are also hard under (non-uniform) 1-1 and size-increasing reductions.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"71 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114111575","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resolution width-size trade-offs for the Pigeon-Hole Principle","authors":"Stefan S. Dantchev","doi":"10.1109/CCC.2002.1004337","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004337","url":null,"abstract":"We prove the following two results: (1) There is a resolution proof of the Weak Pigeon-Hole Principle, WPHP/sub n//sup m/of size 2/sup O([n log n/log m]+log m)/ for any number of pigeons m and any number of holes n. (2) Any resolution proof of WPHP/sub n//sup m/ of width (1/16 - /spl epsi/) n/sup 2/ has to be of size 2/sup /spl Omega/(n)/, independently from m.. These results give not only a resolution size-width tradeoff for the Weak Pigeon-Hole Principle, but also almost optimal such trade-off for resolution in general. The upper bound (1) may be of independent interest, as it has been known for the two extreme values of m, m = n + 1 and in = 2/sup /spl radic/(n log n)/, only.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132827649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resolution lower bounds for perfect matching principles","authors":"A. Razborov","doi":"10.1109/CCC.2002.1004336","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004336","url":null,"abstract":"For an arbitrary hypergraph H let PM(H) be the propositional formula asserting that H contains a perfect matching. We show that every resolution refutation of PM(H) must have size exp((/spl Omega/(/spl delta/(H)//spl lambda/(H)r(H)(log n(H))(r(H)+log n(H)))), where n(H) is the number of vertices, /spl delta/(H) is the minimal degree of a vertex, r(H) is the maximal size of an edge, and /spl lambda/(H) is the maximal number of edges incident to two different vertices. For ordinary graphs G our general bound considerably simplifies to exp (/spl Omega/(/spl delta/(G)/(log n(G))/sup 2/))). As a direct corollary, every resolution proof of the functional onto a version of the pigeonhole principle onto - FPHP/sub n//sup m/ must have size exp (/spl Omega/(n/(log m)/sup 2/)) (which becomes exp (/spl Omega/(n/sup 1/3/)) when the number of pigeons m is unbounded). This in turn immediately implies an exp(/spl Omega/(t/n/sup 3/)) lower bound on the size of resolution proofs of the principle circuit/sub t/(f/sub n/) asserting that the circuit size of the Boolean function f/sub n/ in n variables is greater than t. In particular resolution does not possess efficient proofs of NP /spl subne/ P/poly. 
These results relativize, in a natural way, to more general principle M(U|H) asserting that H contains a matching covering all vertices in U /spl sube/ V(H).","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134262059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Better lower bounds for locally decodable codes","authors":"A. Deshpande, Rahul Jain, T. Kavitha, J. Radhakrishnan, Satyanarayana V. Lokam","doi":"10.1109/CCC.2002.1004354","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004354","url":null,"abstract":"An error-correcting code is said to be locally decodable if a randomized algorithm can recover any single bit of a message by reading only a small number of symbols of a possibly corrupted encoding of the message. Katz and Trevisan (2000) showed that any such code C: {0, 1} /spl rarr/ /spl Sigma//sup m/ with a decoding algorithm that makes at most q probes must satisfy m = /spl Omega/((n/log |/spl Sigma/|)/sup q/(q-1)/). They assumed that the decoding algorithm is non-adaptive, and left open the question of proving similar bounds for adaptive decoders. We improve the results of Katz and Trevisan (2000) in two ways. First, we give a more direct proof of their result. Second, and this is our main result, we prove that m = /spl Omega/((n/log|/spl Sigma/|)/sup q/(q-1)/) even if the decoding algorithm is adaptive. An important ingredient of our proof is a randomized method for smoothing an adaptive decoding algorithm. The main technical tool we employ is the Second Moment Method.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114612588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Decoding concatenated codes using soft information","authors":"V. Guruswami, M. Sudan","doi":"10.1109/CCC.2002.1004350","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004350","url":null,"abstract":"We present a decoding algorithm for concatenated codes when the outer code is a Reed-Solomon code and the inner code is arbitrary. \"Soft\" information on the reliability of various symbols is passed by the inner decodings and exploited in the Reed-Solomon decoding. This is the first analysis of such a soft algorithm that works for arbitrary inner codes; prior analyses could only, handle some special inner codes. Crucial to our analysis is a combinatorial result on the coset weight distribution of codes given only its minimum distance. Our result enables us to decode essentially up to the \"Johnson radius\" of a concatenated code when the outer distance is large (the Johnson radius is the \"a priori list decoding radius\" of a code as a function of its distance). As a consequence, we are able to present simple and efficient constructions of q-ary linear codes that are list decodable up to a fraction (1 - 1/q - /spl epsiv/) of errors and have rate /spl Omega/(/spl epsiv//sup 6/). Codes that can correct such a large fraction of errors have found numerous complexity-theoretic applications. 
The previous constructions of linear codes with a similar rate used algebraic-geometric codes and thus suffered from a complicated construction and slow decoding.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128416713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Resolution lower bounds for the weak pigeon hole principle","authors":"Ran Raz","doi":"10.1109/CCC.2002.1004322","DOIUrl":"https://doi.org/10.1109/CCC.2002.1004322","url":null,"abstract":"We prove that any resolution proof for the weak pigeonhole principle, with n holes and any number of pigeons, is of length /spl Omega/(2/sup n/spl isin//), (for some constant /spl isin/ > 0). One corollary is that a certain propositional formulation of the statement NP /spl nsub/ P/poly does not have short resolution proofs.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124103834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the power of unique 2-prover 1-round games","authors":"Subhash Khot","doi":"10.1145/509907.510017","DOIUrl":"https://doi.org/10.1145/509907.510017","url":null,"abstract":"A 2-prover game is called unique if the answer of one prover uniquely determines the answer of the second prover and vice versa (we implicitly assume games to be one round games). The value of a 2-prover game is the maximum acceptance probability of the verifier over all the prover strategies. We make a conjecture regarding the power of unique 2-prover games, which we call the Unique Games Conjecture.","PeriodicalId":193513,"journal":{"name":"Proceedings 17th IEEE Annual Conference on Computational Complexity","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133317066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}