{"title":"Complexity measures and hierarchies for the evaluation of integers, polynomials, and n-linear forms","authors":"R. Lipton, D. Dobkin","doi":"10.1145/800116.803746","DOIUrl":"https://doi.org/10.1145/800116.803746","url":null,"abstract":"The difficulty of evaluating integers and polynomials has been studied in various frameworks ranging from the addition-chain approach [5] to integer evaluation to recent efforts aimed at generating polynomials that are hard to evaluate [2,8,10]. Here we consider the classes of integers and polynomials that can be evaluated within given complexity bounds and prove the existence of proper hierarchies of complexity classes. The framework in which our problems are cast is general enough to allow any finite set of binary operations rather than just addition, subtraction, multiplication, and division. The motivation for studying complexity classes rather than specific integers or polynomials is analogous to why complexity classes are studied in automata-based complexity: (i) the immense difficulty associated with computing the complexity of a specific integer or polynomial; (ii) the important insight obtained from discovering the structure of the complexity classes.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"90 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87394249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intercalation theorems for tree transducer languages","authors":"C. Raymond Perrault","doi":"10.1145/800116.803761","DOIUrl":"https://doi.org/10.1145/800116.803761","url":null,"abstract":"We develop intercalation lemmas for the computations of the top-down tree transducers defined by Rounds [15] and Thatcher [17]. These lemmas are used to prove necessary conditions for languages all of whose strings are of exponential length to be tree transducer languages. The language {ww:w&egr;{a,b}*, ¦w¦=2n,n≥0}, which is generable by the composition of two transducers, is shown not to be generable by one. The proof technique applies to bottom-up transducers as well. The results are related to some subclasses of Woods' Augmented Transition Networks [18] characterized elsewhere in terms of tree transducer languages [14].","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88826836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On non-linear lower bounds in computational complexity","authors":"L. Valiant","doi":"10.1145/800116.803752","DOIUrl":"https://doi.org/10.1145/800116.803752","url":null,"abstract":"The purpose of this paper is to explore the possibility that purely graph-theoretic reasons may account for the superlinear complexity of wide classes of computational problems. The results are therefore of two kinds: reductions to graph theoretic conjectures on the one hand, and graph theoretic results on the other. We show that the graph of any algorithm for any one of a number of arithmetic problems (e.g. polynomial multiplication, discrete Fourier transforms, matrix multiplication) must have properties closely related to concentration networks.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78411620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the complexity of the Extended String-to-String Correction Problem","authors":"R. Wagner","doi":"10.1145/800116.803771","DOIUrl":"https://doi.org/10.1145/800116.803771","url":null,"abstract":"The Extended String-to-String Correction Problem [ESSCP] is defined as the problem of determining, for given strings A and B over alphabet V, a minimum-cost sequence S of edit operations such that S(A) = B. The sequence S may make use of the operations: Change, Insert, Delete and Swaps, each of constant cost WC, WI, WD, and WS respectively. Swap permits any pair of adjacent characters to be interchanged. The principal results of this paper are: (1) a brief presentation of an algorithm (the CELLAR algorithm) which solves ESSCP in time Ø(¦A¦* ¦B¦* ¦V¦s*s), where s = min(4WC, WI+WD)/WS + 1; (2) presentation of polynomial time algorithms for the cases (a) WS = 0, (b) WS > 0, WC= WI= WD= @@@@; (3) proof that ESSCP, with WI < WC = WD = @@@@, 0 < WS < @@@@, suitably encoded, is NP-complete. (The remaining case, WS= @@@@, reduces ESSCP to the string-to-string correction problem of [1], where an Ø( ¦A¦* ¦B¦) algorithm is given.) Thus, “almost all” ESSCP's can be solved in deterministic polynomial time, but the general problem is NP-complete.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87726512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feasibly constructive proofs and the propositional calculus (Preliminary Version)","authors":"S. Cook","doi":"10.1145/800116.803756","DOIUrl":"https://doi.org/10.1145/800116.803756","url":null,"abstract":"The motivation for this work comes from two general sources. The first source is the basic open question in complexity theory of whether P equals NP (see [1] and [2]). Our approach is to try to show they are not equal, by trying to show that the set of tautologies is not in NP (of course its complement is in NP). This is equivalent to showing that no proof system (in the general sense defined in [3]) for the tautologies is “super” in the sense that there is a short proof for every tautology. Extended resolution is an example of a powerful proof system for tautologies that can simulate most standard proof systems (see [3]). The Main Theorem (5.5) in this paper describes the power of extended resolution in a way that may provide a handle for showing it is not super. The second motivation comes from constructive mathematics. A constructive proof of, say, a statement @@@@×A must provide an effective means of finding a proof of A for each value of x, but nothing is said about how long this proof is as a function of x. If the function is exponential or super exponential, then for short values of x the length of the proof of the instance of A may exceed the number of electrons in the universe. In section 2, I introduce the system PV for number theory, and it is this system which I suggest properly formalizes the notion of a feasibly constructive proof.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88281754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On computing the minima of quadratic forms (Preliminary Report)","authors":"A. Yao","doi":"10.1145/800116.803749","DOIUrl":"https://doi.org/10.1145/800116.803749","url":null,"abstract":"The following problem was recently raised by C. William Gear [1]: Let F(x1,x2,...,xn) = &Sgr;i≤j a'ijxixj + &Sgr;i bixi +c be a quadratic form in n variables. We wish to compute the point x→(0) = (x1(0),...,xn(0)), at which F achieves its minimum, by a series of adaptive functional evaluations. It is clear that, by evaluating F(x→) at 1/2(n+1)(n+2)+1 points, we can determine the coefficients a'ij,bi,c and thereby find the point x→(0). Gear's question is, “How many evaluations are necessary?” In this paper, we shall prove that O(n2) evaluations are necessary in the worst case for any such algorithm.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"36 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76581971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Proving assertions about programs that manipulate data structures","authors":"D. Oppen, S. Cook","doi":"10.1145/800116.803758","DOIUrl":"https://doi.org/10.1145/800116.803758","url":null,"abstract":"In this paper we wish to consider the problem of proving assertions about programs that construct and alter data structures. Our method will be to define a suitable assertion language L for data structures, to define a simple programming language L' for constructing and altering data structures, to give axioms and rules of inference (in the style of [Hoare 1969]) which specify the effect of program segments on data structures (described by formulas in L) and finally to prove that these axioms are correct (relative to a formal definition of the semantics of L') and, in a reasonable sense, complete. Thus our intention is to provide a complete theoretical framework for describing arbitrary data structures and proving assertions about programs that manipulate them.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72667161","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Degree-languages, polynomial time recognition, and the LBA problem","authors":"D. Wotschke","doi":"10.1145/800116.803763","DOIUrl":"https://doi.org/10.1145/800116.803763","url":null,"abstract":"The so-called Chomsky hierarchy [5], consisting of regular, context-free, context-sensitive, and recursively enumerable languages, does not account for many “real world” classes of languages, e.g., programming languages and natural languages [4]. This is one of the reasons why many attempts have been made to “refine” the original Chomsky classification. The main goal has been to describe languages which, for instance, are not context-free but are still context-sensitive, without using the powerful and complex concept of context-sensitive grammars.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77670742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Two applications of a probabilistic search technique: Sorting X+Y and building balanced search trees","authors":"M. Fredman","doi":"10.1145/800116.803774","DOIUrl":"https://doi.org/10.1145/800116.803774","url":null,"abstract":"Let X = {x1,...,xN} and Y = {y1,...,yN} be sets of N real numbers. We denote by X + Y the multiset {xi + yj; 1 ≤ i, j ≤ N} of size N2. Berklekamp has posed the problem of sorting X + Y. Harper, Payne, Savage and Strauss [1] show that N21og2N comparisons suffice to sort X + Y, thereby saving a factor of 2 over sorting without exploiting the structure of X + Y. (Given u in X + Y, we assume that we know the i,j indices such that u = xi + yj.) Furthermore, they show that this bound is tight for a restricted class of comparison algorithms. However, without their restriction the order of magnitude comparison complexity of this problem has remained an open question. In this paper we show that X + Y can be sorted with O(N2) comparisons. Our proof is unusual for this type of problem in that we do not explicitly exhibit an algorithm. Instead, it is a particular application of a more general search technique whose behavior is easily related to information theoretic lower bounds. In the context of sorting, this search method translates into an insertion sort, where the insertions are not performed by means of the usual binary search, but rather as off-centered searches designed so that each comparison, roughly speaking, equally divides the space of remaining possibilities. We draw attention to this search technique because it might find application to other problems, and we illustrate this possibility with a second application. Our second application concerns the construction of probabilistically balanced binary search trees.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"97 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78676578","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Four models for the analysis and optimization of program control structures","authors":"T. W. Pratt","doi":"10.1145/800116.803766","DOIUrl":"https://doi.org/10.1145/800116.803766","url":null,"abstract":"The analysis of the relation between the structure of a program and the function that it computes requires a decomposition of the program into its components. Traditionally this decomposition has been based on the common division of a program into subprograms, and ultimately into statements, expressions and individual variables and constants. In this paper an alternative decomposition is proposed that is based on the decomposition of a program into a set of kernel elements, those program elements that participate in the direct computation of the outputs of the program, and a set of control elements, those elements that participate in the determination of the execution path. The kernel-control decomposition of a program leads to a series of progressively more abstract program representations, each of which has both theoretical and practical interest. The separation of control structure from kernel and the three abstract models presented here, which are based on this decomposition, are particularly valuable in the analysis and optimization of program control structures. This research summary outlines the major results, which will be reported in full in a journal article.","PeriodicalId":20566,"journal":{"name":"Proceedings of the forty-seventh annual ACM symposium on Theory of Computing","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"1975-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74125244","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}