{"title":"Database theory and cylindric lattices","authors":"S. Cosmadakis","doi":"10.1109/SFCS.1987.17","DOIUrl":"https://doi.org/10.1109/SFCS.1987.17","url":null,"abstract":"The relational model for databases [Co I, Co2] provides a valuable formal foundation for understanding the issues of database design. A relational database consists of a set of tables (relations), where each table contains a set of records (tuples). For example, we might have a database with two relations Rand S, where R has two columns labeled EMP (employee) and SAL (salary), and S has two columns labeled PRJ (project) and MGR (manager).","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126136359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Finding near optimal separators in planar graphs","authors":"Satish Rao","doi":"10.1109/SFCS.1987.26","DOIUrl":"https://doi.org/10.1109/SFCS.1987.26","url":null,"abstract":"A k-ratio edge separator is a set of edges which separates a weighted graph into two disconnected sets of components neither of which contains more than k-1/k of the original graph's weight. An optimal quotient separator is an edge separator where the size of the separator (i.e., the number of edges) divided by the weight of the smaller set of components is minimized. An optimal quotient k-ratio separator is an edge separator where the size of the separator (i.e., the number of edges) divided by the smaller of either 1/k of the total weight or the weight of the smaller set of components is minimized. In this paper we present an algorithm that finds the optimal quotient k-ratio separator for any k ≥ 3. We use the optimal quotient algorithm to obtain approximation algorithms for finding optimal k-ratio edge separators for any k ≥ 3. Given a planar graph with a size OPT k-ratio separator, we describe an algorithm which a finds k-ratio separator which costs less than O(OPT log n). More importantly the algorithm finds ck-ratio separators (for any c ≫ 1) which cost less than C(c)OPT, where C(c) depends only on c.","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115531581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An output sensitive algorithm for computing visibility graphs","authors":"S. Ghosh, D. Mount","doi":"10.1137/0220055","DOIUrl":"https://doi.org/10.1137/0220055","url":null,"abstract":"The visibility graph of a set of nonintersecting polygonal obstacles in the plane is an undirected graph whose vertices are the vertices of the obstacles and whose edges are pairs of vertices (u, v) such that the open line segment between u and v does not intersect any of the obstacles. The visibility graph is an important combinatorial structure in computational geometry and is used in applications such as solving visibility problems and computing shortest paths. An algorithm is presented that computes the visibility graph of s set of obstacles in time O(E + n log n), where E is the number of edges in the visibility graph and n is the total number of vertices in all the obstacles.","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131294971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The complexity of parallel comparison merging","authors":"Mihály Geréb-Graus, D. Krizanc","doi":"10.1109/SFCS.1987.55","DOIUrl":"https://doi.org/10.1109/SFCS.1987.55","url":null,"abstract":"We prove a worst case lower bound of Ω(log log n) for randomized algorithms merging two sorted lists of length n in parallel using n processors on Valiant's parallel computation tree model. We show how to strengthen this result to a lower bound for the expected time taken by any algorithm on the uniform distribution. Finally, bounds are given for the average time required for the problem when the number of processors is less than and greater than n.","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127201748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Some polynomial and Toeplitz matrix computations","authors":"V. Pan, J. Reif","doi":"10.1109/SFCS.1987.52","DOIUrl":"https://doi.org/10.1109/SFCS.1987.52","url":null,"abstract":"Part 1. Approximate Evaluation of Polynomial Zeros O(n2(1og2n+log b)) arithmetic operations or O( n(log2n+log b) parallel steps, n processors suffice in order to approximate with absolute error ~ 2mb to all the complex zeros of an n-th degree polynomial p(x) whose coefficients have moduli < 2m• If we only need such an approximation to a single zero of p(x), then O(n log n(n+log b)) arithmetic operations or O(log n(log2n+log b)) steps and n+n/(loin+log b) processors suffice (which places the latter problem in NC); furthermore if all the zeros are real, then we arrive at the bounds O(n log n(log3n+log b)), O(log n(log3+log b)), and n, respectively. Those estimates are reached in computations with O(nb) binary bits where the polynomial· has integer coefficients. This also implies a simple proof of the Boolean circuit complexity estimates for the approximation of all the complex zeros of p(x), announced in 1982 and partly proven by Schonhage. 
The computations rely on recursive application of Turán's proximity test of 1968, on its more recent extensions to root radii computations, and on contour integration via FFT within our modifications of the known geometric constructions for search and exclusion.","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121591461","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiplicative complexity of polynomial multiplication over finite fields","authors":"M. Kaminski, N. Bshouty","doi":"10.1145/58562.59306","DOIUrl":"https://doi.org/10.1145/58562.59306","url":null,"abstract":"Let Mq(n) denote the number of multiplications required to compute the coefficients of the product of two polynomials of degree n over a q-element field by means of bilinear algorithms. It is shown that Mq(n) ≥ 3n - o(n). In particular, if q/2 ≪ n ≤ q + 1, we establish the tight bound Mq(n) = 3n + 1 - ⌊q/2⌋. The technique we use can be applied to analysis of algorithms for multiplication of polynomials modulo a polynomial as well.","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"408 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122735018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Perfect zero-knowledge languages can be recognized in two rounds","authors":"W. Aiello, J. Håstad","doi":"10.1109/SFCS.1987.47","DOIUrl":"https://doi.org/10.1109/SFCS.1987.47","url":null,"abstract":"A hierarchy of probabilistic complexity classes generalizing NP has recently emerged in the work of [Ba], [GMR], and [GS]. The IP hierarchy is defined through the notion of an interactive proof system, in which an all powerful prover tries to convince a probabilistic polynomial time verifier that a string w is in a language L. The verifier tosses coins and exchanges messages back and forth with the prover before he decides whether to accept w. This proof-system yields \"probabilistic\" proofs: the verifier may erroneously accept or reject w with small probability. In [GMR] such a protocol was defined to be a zero-knowledge protocol if at the end of the interaction the verifier has learned nothing except that w ∈ L. We study complexity theoretic implications of a language having this property. In particular we prove that if L admits a zeroknowledge proof then L can also be recognized by a two round interactive proof. This complements a result by Fortnow [F] where it is proved that the complement of L has a two round interactive proof protocol. The methods of proof are quite similar to those of Fortnow [F]. 
As in his case the proof works under the assumption that the original protocol is only zero-knowledge with respect to a specific verifier.","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128257314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Polytope range searching and integral geometry","authors":"B. Chazelle","doi":"10.1109/SFCS.1987.48","DOIUrl":"https://doi.org/10.1109/SFCS.1987.48","url":null,"abstract":"plexity ofsimplex range searching. We prove that the worst-case query time is 0. (n/vm), for d = 2, and more generally, 0. (nl log n)/m 1 / d ) , for d ~ 3; n is the number of points and m is the amount of stor age available. These bounds hold with high probability for a random point-set (from a uniform distribution) and thus are valid in the worst case as well as on the average. Interestingly, they still hold if the query remains congruent to a fixed simplex or even a fixed slab. What is the significance of these lower bounds? From a practical standpoint the news is disheartening but instructive. For the sake of il lustration, take d = 11: our results say that with only linear storage the query time will have to be at least 0'(nO. 9 ). To make matters worse, this quasi-linear lower bound also holds on the average, so it is un escapable in practice. For the query time to be lowered to, say, O(y'n), one would need g(n S ) storage, and a whopping n(n 10 ) space would be necessary if a polylogarithmic query time were desired. Things are","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133149513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The average complexity of deterministic and randomized parallel comparison sorting algorithms","authors":"N. Alon, Y. Azar","doi":"10.1137/0217074","DOIUrl":"https://doi.org/10.1137/0217074","url":null,"abstract":"In practice, the average time of (deterministic or randomized) sorting algorithms seems to be more relevant than the worst case time of deterministic algorithms. Still, the many known complexity bounds for parallel comparison sorting include no nontrivial lower bounds for the average time required to sort by comparisons n elements with p processors (via deterministic or randomized algorithms). We show that for p ≥ n this time is Θ (log n/log(1 + p/n)), (it is easy to show that for p ≤ n the time is Θ (n log n/p) = Θ (log n/(p/n)). Therefore even the average case behaviour of randomized algorithms is not more efficient than the worst case behaviour of deterministic ones.","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121201168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the cunning power of cheating verifiers: Some observations about zero knowledge proofs","authors":"Yair Oren","doi":"10.1109/SFCS.1987.43","DOIUrl":"https://doi.org/10.1109/SFCS.1987.43","url":null,"abstract":"In this paper we investigate some properties of zero-knowledge proofs, a notion introduced by Goldwasser, Micali and Rackoff. We introduce and classify various definitions of zero-knowledge. Two definitions which are of special interest are auxiliary-input zero-knowledge and blackbox-simulation zero-knowledge. We explain why auxiliary-input zero-knowledge is a definition more suitable for cryptographic applications than the original [GMR1] definition. In particular, we show that any protocol composed of subprotocols which are auxiliary-input zero-knowledge is itself auxiliary-input zero-knowledge. We show that blackbox simulation zero-knowledge implies auxiliary-input zeroknowledge (which in turn implies the [GMR1] definition). We argue that all known zero-knowledge proofs are in fact blackbox-simulation zero-knowledge (i.e. were proved zero-knowledge using blackbox-simulation of the verifier). As a result, all known zero-knowledge proof systems are shown to be auxiliary-input zero-knowledge and can be used for cryptographic applications such as those in [GMW2]. We demonstrate the triviality of certain classes of zero-knowledge proof systems, in the sense that only languages in BPP have zero-knowledge proofs of these classes. In particular, we show that any language having a Las vegas zeroknowledge proof system necessarily belongs to R. We show that randomness of both the verifier and the prover, and nontriviality of the interaction are essential properties of non-trivial auxiliary-input zero-knowledge proofs. 
In order to derive most of the results in the paper we make use of the full power of the definition of zero-knowledge: specifically, the requirement that there exist a simulator for any verifier, including \"cheating verifiers\".","PeriodicalId":153779,"journal":{"name":"28th Annual Symposium on Foundations of Computer Science (sfcs 1987)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1987-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121876736","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}