{"title":"Optimization of error recovery in syntax-directed Parsing algorithms","authors":"Jacques E. LaFrance","doi":"10.1145/800028.808489","DOIUrl":"https://doi.org/10.1145/800028.808489","url":null,"abstract":"The syntactic error recovery of automatically generated recognizers is considered with two related systems for automatically generating syntactic error recovery presented, one for a Floyd production language recognizer, the other for a recursive descent recognizer. The two systems have been implemented for a small language consisting of a subset of ALGOL. When compared with each other and with a commercial ALGOL compiler, the results indicate that automatically generated syntactic error recovery can exceed the performance of reasonable hand-coded error recovery.","PeriodicalId":399752,"journal":{"name":"Proceedings of a symposium on Compiler optimization","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1970-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132065518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"CDC 6600/7600 optimization","authors":"R. G. Zwakenberg","doi":"10.1145/390013.808491","DOIUrl":"https://doi.org/10.1145/390013.808491","url":null,"abstract":"Efficient use of the CDC 6600/7600 computers requires maximum utilization of the parallelism (6600/7600) and pipeline (7600) features of the functional units and the ability to perform iterative execution within a minimal number of machine words (6600/7600). Factors which must be taken into consideration when producing efficient object codes are: (1) The need for compression of code generated for loop structures, and (2) The criteria concerning instruction issue and execution times. The addition of an optimization pass to the LRLTRAN compiler has allowed the LRLTRAN language programmer to attain the maximum potential speeds of the 6600/7600 computers.","PeriodicalId":399752,"journal":{"name":"Proceedings of a symposium on Compiler optimization","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1970-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132274158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization for an array computer","authors":"R. Millstein","doi":"10.1145/800028.808490","DOIUrl":"https://doi.org/10.1145/800028.808490","url":null,"abstract":"The unconventional design of the ILLIAC-IV requires unconventional optimization techniques. Conventional techniques focus on the program. Since conventional hardware executes one instruction at a time, greater efficiency is obtained by reducing the number of instructions executed. Elimination of common subexpressions and literal computations, removal of locally invariant computations, reduction of operator strength, etc. are all methods of restructuring a program to allow greater efficiency. This focus on the program is not sufficient for the ILLIAC-IV. Efficient use of an array of processors depends upon the data being stored so as to permit parallel execution on many data streams. Further, the inability of each processor to access more than 2K of memory requires the use of routing commands for inter-processor communication. Hence, optimization on an array computer requires restructuring of the data as the primary area of effort. Such restructuring includes, for example, an extension to the skewed storage method which permits any slice of any array to be accessed in parallel and, further, to be aligned with an other slice by a uniform route. (A slice of an n-dimentional array A is the vector {A(cl,..., ci−l, j, ci+l,..., cn: mi ≤ j ≤ Mi, ck constant}.)","PeriodicalId":399752,"journal":{"name":"Proceedings of a symposium on Compiler optimization","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1970-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122914502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Expression optimization using unary complement operators","authors":"D. Frailey","doi":"10.1145/800028.808485","DOIUrl":"https://doi.org/10.1145/800028.808485","url":null,"abstract":"For purposes of code optimization there are two basic philosophies of expression analysis: one approach would attempt to do a relatively complete analysis, detecting all redundancies which are logically possible. The other approach would aim at those things which are easily detected and/or highly likely to occur. This paper gives a set of algorithms which derive from the latter philosophy but which are based on general properties rather than specific facts about a particular language or machine. The first section of the paper gives details of a notation used for describing code and defining algorithms. The most significant feature of this notation is that it allows operands to be complemented by any number of “complement operators”. This is done because most of the algorithms make frequent use of the properties of such operators. The second section describes a canonical form for expressions and a series of algorithms based on this form and the properties of complement operators. There are various facets of compiler structure which might bear on the exact usage of these algorithms. Although such considerations are not part of the scope of this paper, occasional comments are made about the relationship of an algorithm to other parts of a compiler. The third section contains a discussion of how these algorithms would fit within an overall optimizer structure.","PeriodicalId":399752,"journal":{"name":"Proceedings of a symposium on Compiler optimization","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1970-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122291425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}