{"title":"Controlling concurrency using locking protocols","authors":"Z. Kedem, A. Silberschatz","doi":"10.1109/SFCS.1979.12","DOIUrl":"https://doi.org/10.1109/SFCS.1979.12","url":null,"abstract":"This paper is concerned with the problem of developing locking protocols for ensuring the consistency of database systems that are accessed concurrently by a number of independent transactions. It is assumed that the database is modelled by a directed acyclic graph whose vertices correspond to the database entities, and whose arcs correspond to certain locking restrictions. Several locking protocols are presented. The weak protocol is shown to ensure consistency and deadlock-freedom only for databases that are organized as trees. For the databases that are organized as directed acyclic graphs, the strong protocol is presented. Discussion of SHARED and EXCLUSIVE locks is also included.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127119312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Approximate algorithms for optimization of busy waiting in parallel programs","authors":"E. Clarke, Lishing Liu","doi":"10.1109/SFCS.1979.5","DOIUrl":"https://doi.org/10.1109/SFCS.1979.5","url":null,"abstract":"Traditional implementations of conditional critical regions and monitors can lead to unproductive \"busy waiting\" if processes are allowed to wait on arbitrary boolean expressions. Techniques from global flow analysis may be employed at compile time to obtain information about which critical regions (monitor calls) are enabled by the execution of a given critical region (monitor call). We investigate the complexity of computing this information and show how it can be used to obtain efficient scheduling algorithms with less busy waiting.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128394491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A subexponential algorithm for the discrete logarithm problem with applications to cryptography","authors":"L. Adleman","doi":"10.1109/SFCS.1979.2","DOIUrl":"https://doi.org/10.1109/SFCS.1979.2","url":null,"abstract":"In 1870 Bouniakowsky [2 J publ ished an algorithm to solve the congruence aX _ bMOD (q). While his algorithm contained several clever ideas useful for small numbers, its asymptotic complexity was O(q). Despite its long history, no fast algorithm has ever emerged for the Discrete Logarithm Problem and the best published method, due to Shanks [lOJ requires O(ql/2) in time and space. The problem has attracted renewed interest in recent years because of its use in cryptography [7 ], [15J,[19J. In particular, the security of the Diffie-Hellman Public Key Distribution Sy s t em [7 J II de pen d s c r ucia 11yon the d iff i c u1t Y 0 f com put i ng log a r i t hms MOD q II • We present a new algorithm for this problem which runs in RTIME better than O(qE) for all E > O.t While no effort is made to present the most efficient incarnation of tActually our algorithm runs in RTIME O(2(O(/10g(q)loglog(q))). RTIME denotes Random Time and refers to algorithms which may use random numbers in their processing. For example, the well known composite testing algorithms of Solovay &Strassen [21J, Miller [11J and Rabin [16J run in RTIME (0(log3(q))). For precise definitions see [1], [llJ and [9J.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128771262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A TcS2 = 0 (2n) time/space tradeoff for certain NP-complete problems","authors":"R. Schroeppel, A. Shamir","doi":"10.1109/SFCS.1979.3","DOIUrl":"https://doi.org/10.1109/SFCS.1979.3","url":null,"abstract":"In this paper we develop a general purpose algorithm that can solve a number of NP-complete problems in time T = O(2n/2) and space S = O(2n/4). The algorithm can be generalized to a family of algorithms whose time and space complexities are related by T¿S2 = O(2n). The problems it can handle are characterized by a few decomposition axioms, and they include knapsack problems, exact satisfiability problems, set covering problems, etc. The new algorithm has a considerable cryptanalytic significance, since it can break the Merkle-Hellman public key cryptosystem whose recommended size is n = 100.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129331999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"New classes and applications of hash functions","authors":"M. Wegman, L. Carter","doi":"10.1109/SFCS.1979.26","DOIUrl":"https://doi.org/10.1109/SFCS.1979.26","url":null,"abstract":"In this paper we exhibit several new classes of hash functions with certain desirable properties, and introduce two novel applications for hashing which make use of these functions. One class of functions is small, yet is almost universal2. If the functions hash n-bit long names into m-bit indices, then specifying a member of the class requires only O((m + log2log2(n)) log2(n)) bits as compared to O(n) bits for earlier techniques. For long names, this is about a factor of m larger than the lower bound of m+log2n-log2m bits. An application of this class is a provably secure authentication techniques for sending messages over insecure lines. A second class of functions satisfies a much stronger property than universal2. We present the application of testing sets for equality. The authentication technique allows the receiver to be certain that a message is genuine. An 'enemy' - even one with infinite computer resources - cannot forge or modify a message without detection. The set equality technique allows the the operations 'add member to set', 'delete member from set' and 'test two sets for equality' to be performed in expected constant time and with less than a specified probability of error.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133892300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Complexity of partial satisfaction","authors":"K. Lieberherr, E. Specker","doi":"10.1145/322248.322260","DOIUrl":"https://doi.org/10.1145/322248.322260","url":null,"abstract":"A conjunctive normal form (cnf) is 2-satisfiable, iff any 2 of its clauses are satisfiable. It is shown that every 2-satisfiable cnf s has an interpretation which satisfies at least h¿length(s) clauses (h=(√5-1)/2∼0.618). This result is optimal, insofar as the given constant h is maximal. The proof is polynomially constructive, i.e., it yields a polynomial algorithm, which computes an interpretation satisfying h¿length(s) clauses for the 2-satisfiable cnf's s. Moreover, if h¿h' and h' is e.g. algebraic, the following set is NP-complete: The 2-satisfiable cnf's s having an interpretation which satisfies at least h'¿length(s) clauses.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128852082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Length of predicate calculus formulas as a new complexity measure","authors":"N. Immerman","doi":"10.1109/SFCS.1979.21","DOIUrl":"https://doi.org/10.1109/SFCS.1979.21","url":null,"abstract":"We introduce a new complexity measure, QR[f(n)], which clocks the size of formulas from predicate calculus needed to express a given property. Techniques from logic are used to prove sharp lower bounds in the measure. These results demonstrate space requirements for computations and may provide techniques for seperating Time and Space complexity classes because we show that: NSPACE[f(n)] ⊆ QR[(f(n))2/log(n)] ⊆ DSPACE[f(n)2].","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124078190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Observations about the development of theoretical computer science","authors":"J. Hartmanis","doi":"10.1109/SFCS.1979.27","DOIUrl":"https://doi.org/10.1109/SFCS.1979.27","url":null,"abstract":"This paper gives a personal account of some developments in automata theory and computational complexity theory. Though the account is subjective and deals primarily with the research areas of direct interest to the author, it discusses the underlying beliefs and philosophy which guided this research as well as the intellectual environment and the ideas and contacts which influenced it. An attempt is also made to draw some general conclusions about computer science research and to discuss the nature of theoretical computer science.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121580862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Field extension and trilinear aggregating, uniting and canceling for the acceleration of matrix multiplications","authors":"V. Pan","doi":"10.1109/SFCS.1979.17","DOIUrl":"https://doi.org/10.1109/SFCS.1979.17","url":null,"abstract":"The acceleration of matrix multiplication MM, is based on the combination of the method of algebraic field extension due to D. Bini, M. Capovani, G. Lotti, F. Romani and S. Winograd and of trilinear aggregating, uniting and canceling due to the author. A fast algorithm of O(N2.7378) complexity for N × N matrix multiplication is derived. With A. Schönhage's Theorem about partial and total MM, our approach gives the exponent 2.6054 by the price of a serious increase of the constant.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116056328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On a general method for maximizing and minimizing among certain geometric problems","authors":"D. Dobkin, Lawrence Snyder","doi":"10.1109/SFCS.1979.28","DOIUrl":"https://doi.org/10.1109/SFCS.1979.28","url":null,"abstract":"Problems concerned with finding inscribing or circumscribing polygons that maximize some measurement are considered such as: Find an area maximizing triangle inscribed in a given convex polygon. Algorithms solving a number of these problems in linear time are presented. They use the common approach of finding an initial solution with respect to a fixed bounding point and then iteratively transforming this solution into a new solution with respect to a new point. The generality of this approach is discussed and several open problems are noted.","PeriodicalId":311166,"journal":{"name":"20th Annual Symposium on Foundations of Computer Science (sfcs 1979)","volume":"164 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1979-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133401121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}