{"title":"Parallel image generation for a 3D display","authors":"T. Theoharis, A. Travis, N. Wiseman","doi":"10.1109/PARBSE.1990.77177","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77177","url":null,"abstract":"Two viewing models for an experimental three-dimensional display are presented. Two alternative projections were tried. These are parallel oblique and perspective oblique, and they place different requirements on the 3-D display hardware. The parallel oblique projections may produce a jagged effect when P is small but will correctly maintain the horizontal parallax effect for a range of distances between screen and viewer. With the aid of parallel processing the time required to change the 3-D image will be comparable with the time taken to alter the image on a conventional display.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115170581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On-the-fly processing of continuous data streams with a pipeline of microprocessors","authors":"S. Berkovich, Z. Kitov, A. Meltzer","doi":"10.1109/PARBSE.1990.77182","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77182","url":null,"abstract":"A pipeline of microprocessors which is able to perform substantial on-the-fly transformations with large amounts of data has been developed. The general concept is a pipelined structure for associative processing. This structure is based on the use of a long sequence (pipeline) of identical, relatively simple associative pattern matching/transforming elements. The associative pipeline achieves high performance for relatively small algorithms with a large volume of data and so is well suited for use with very large databases. The effectiveness of the developed pipeline has been analyzed for various database applications. This system can implement basic operations of relational algebra, as well as rather sophisticated filtering functions; in particular, it can be used to control the I/O operations for the purpose of computer security.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124369693","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"OSCAR: an architecture for weak-consistency replication","authors":"A. Downing, I. Greenberg, J. Peha","doi":"10.1109/PARBSE.1990.77160","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77160","url":null,"abstract":"An architecture for providing weak-consistency replication for databases in an internetwork is presented. It is designed to make the databases highly available and to operate reliably under difficult conditions, such as unreliable communication, low-bandwidth communication, network partitions, and host failures. Updates are stored in logs until they have been propagated to all database sites and properly delivered to the databases. A novel approach called mediation is used to provide integrated support for reliable replication and log purging. Other interesting features include requiring minimal support from database management systems, support of multiple weak-consistency methods, and easy tuning of the architecture's basic algorithms to particular environments.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121205874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparison of immediate-update and workspace transactions: serializability and failure tolerance","authors":"S. Turc","doi":"10.1109/PARBSE.1990.77158","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77158","url":null,"abstract":"A theoretical study of concurrency control and failure tolerance which includes both immediate-update (IU) and workspace (WS) transactions is presented. All previous formal approaches only consider IU transactions. A WS transaction first reads objects and updates them only in its private workspace; the objects are written only after the WS transaction commits. In order to examine execution correctness for both transaction models, it is necessary to reshape serializability theory. The framework constructed here handles both transaction and system failures and covers IU and WS transactions. The results show that the two transaction types impose different conditions on schedulers and recovery algorithms and deny the fact that WS transactions require optimistic schedulers and 'intention list' recovery. In comparing the two models, some histories of WS transactions which could not be obtained with IU transactions are given.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"294 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123662475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A high speed KDL-RAM file system for parallel computers","authors":"S. Pramanik, C. Severance, T. Rosenau","doi":"10.1109/PARBSE.1990.77141","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77141","url":null,"abstract":"The design, implementation, and performance of a main memory file system are presented. The implementation is based on a two-stage abstract parallel processing model. The objective of this model is to maximize throughput and minimize response time. To maximize throughput, lock structures, access structures, and shared variables are distributed among the shared memories. A novel approach based on hash-based parallel accesses is used. The effect of lock conflict is minimized by an optimistic locking protocol. Analytical models are developed for hot spot memory accesses, distributed data accesses, and space-versus-time tradeoffs for fast accesses to records. On the basis of the performance results of these models, a high-speed KDL-RAM (key accessed, dynamically reconfigurable, distributed locked random-access memory) file system has been implemented on the Butterfly PLUS Parallel Processor. Various performance results of this system are given. It is shown that the performance improvement of this system is considerably better than BBN's Butterfly RAMFile system on the Butterfly PLUS Parallel Processor.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"138 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122949433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conformance of Chinese text to Zipf's law","authors":"J. Clark, K. Lua, J. McCallum","doi":"10.1109/PARBSE.1990.77200","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77200","url":null,"abstract":"An investigation was carried out to determine whether Chinese text material conforms to Zipf's law. The information reservoir for this particular investigation contains 2,022,604 Chinese ideograms. It is shown that single Chinese characters do not conform to Zipf's law; however compound words are found to conform well. In addition, examining the regression analysis for compound words implies a good degree of conformity.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114190426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"VLSI parallel and distributed processing algorithms for multidimensional discrete cosine transforms","authors":"Tze-Yun Sung","doi":"10.1109/PARBSE.1990.77176","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77176","url":null,"abstract":"A VLSI parallel and distributed computation algorithm has been proposed and mapped onto a VLSI architecture for a 1-D discrete cosine transform (DCT) involving the symmetry property. In this 1-D DCT processor architecture, there are (log/sub 2/2N) DCT processor units (PUs) required for computation of a frame of N-point data with a time complexity of O(N). Further, a proposed 2-D DCT processor architecture requires (M(log/sub 2/2N)+N(log/sub 2/2M)) PUs with a time complexity of O(M+N). An optimal architecture for computation of a multidimensional DCT has been proposed. The 3-D DCT processor architecture requires NL log/sub 2/2M+LM log/sub 2/2N+MN log/sub 2/2L PUs with a time complexity of O(M+N+L). All architectures can be controlled by firmware; hence they are more flexible, efficient, and fault-tolerant and therefore very suitable for VLSI implementation.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"34 14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116277445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance evaluation of a new optimistic concurrency control algorithm","authors":"Jonathan Addess, E. Gudes, D. Tal, N. Rishe","doi":"10.1109/PARBSE.1990.77194","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77194","url":null,"abstract":"A modification of the classic Kung-Robinson timestamp-based concurrency control algorithm is described. The algorithm is based on two innovative techniques: query killing notes and weak serializability of transactions. In particular, it prefers long transactions over short queries and thus reduces considerably the number of transaction rollbacks required. In order to test the validity and evaluate the performance of the proposed algorithm, a simulation program was written and run using a realistic set of transactions. The simulation was performed using Flat Concurrent Prolog (FCP). The advantages of FCP for specifying and implementing parallel algorithms include its refined granularity of parallelism, its declarativeness and conciseness, and its powerful communication and synchronization primitives. Results of algorithm performance are presented.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116305950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performing logical database design using an E-R graph rewriting system","authors":"C.J. Breiteneder, T. Muck","doi":"10.1109/PARBSE.1990.77139","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77139","url":null,"abstract":"The authors present a formalism which restricts the freedom of connecting different entity-relationship constructs so that only syntactically and semantically well-formed diagrams can be designed. The methods used in this formalism are graph rewriting for the generation of conceptual structures, string rewriting for graph markings, and assertions for establishing the semantic correctness of the generated diagram. The main purpose of this research work is the formal specification of a design tool which supports relational database design with different design goals. Further goals are the ease of application of the resulting methodology, even without tool support, and the possibility of changing the behavior of the design tool easily.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"128 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132709194","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient processing of distributed set queries","authors":"M. El-Sharkawi, Y. Kambayashi","doi":"10.1109/PARBSE.1990.77111","DOIUrl":"https://doi.org/10.1109/PARBSE.1990.77111","url":null,"abstract":"The problem of efficiently processing queries that manipulate sets is considered with the objective of minimizing the processing cost by reducing the size of transmitted data as much as possible. The semantics of set operations is used to achieve this goal. A set query has the general form SET 1 op SET 2. For two sets to be related by a set operation, their sizes should satisfy a necessary condition. For the two sets to be equal, they should have the same size. For SET 1 to be a subset of SET 2, its size should be less than or equal to the size of SET 2. In the relational model, given two attributes, the size of a set of values from one attribute that is associated with a value from the other attribute can be determined using functional dependency between the two attributes. Using these semantics, a distributed set query can be converted into a distributed nonset query. When the two sets are of size greater than one, however, the query cannot be converted into a nonset query. It is converted into another distributed set query. The size of data transmitted to answer the new query is reduced as much as possible. This is done by sending sets that satisfy the necessary condition of the set operation.<<ETX>>","PeriodicalId":389644,"journal":{"name":"Proceedings. PARBASE-90: International Conference on Databases, Parallel Architectures, and Their Applications","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115485880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}