{"title":"Temporal predicate transition nets and their applications","authors":"Xudong He","doi":"10.1109/CMPSAC.1990.139364","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139364","url":null,"abstract":"A new class of high-level Petri nets is defined, which is a combination of predicate transition nets and first order temporal logic. By combining these two formal methods, one can explicitly specify the structures and specify and verify various properties of parallel and distributed systems in the same framework, which cannot be achieved by using either one of the formal methods individually. Therefore, a more powerful methodology for the specification and the verification of parallel and distributed systems is obtained. The application of temporal predicate transition nets is illustrated through the specification and the verification of the five-dining-philosophers problem.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128400584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A program transformation approach to automating software re-engineering","authors":"Scott Burson, Gordon Kotik, L. Markosian","doi":"10.1109/CMPSAC.1990.139375","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139375","url":null,"abstract":"The authors describe a novel approach to software re-engineering that combines several technologies: object-oriented databases integrated with parser, for capturing the software to be re-engineered; specification and pattern languages for querying and analyzing a database of software; and transformation rules for automatically generating re-engineered code. The authors then describe REFINE, an environment for program representation, analysis, and transformation that provides the tools needed to implement the automation of software maintenance and re-engineering. The transformational approach is illustrated with examples taken from actual experience in re-engineering software in C, JCL and NATURAL. It is concluded that the ability to support automation in modifying large software systems by using rule-based program transformation is a key innovation of the present approach that distinguishes it from tools that focus only on automation of program analysis.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"69 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127983361","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Diagnosis system for automatic detection of deadlock in asynchronous concurrent distributed computing systems: using timed Petri net with stacks","authors":"Jenn-Nan Chen, Peter Chen","doi":"10.1109/CMPSAC.1990.139456","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139456","url":null,"abstract":"The authors show how to use the timed Petri net with stacks (TPNS-net) to describe asynchronous concurrent distributed computing systems (DCS) which are based on the environment of loosely coupled computing systems. They also present methods for detecting types of DCS deadlocks such as cycle waiting, hold and wait, and exclusive access. It is shown that TPNS-net permits a process to request more than one resource at a time, express the dynamic state of the system, and increase the system parallelism.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"465 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116781966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Major technical issues in medical informatics computer technology systems and applications","authors":"I. F. Chang","doi":"10.1109/CMPSAC.1990.139416","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139416","url":null,"abstract":"Technical issues in medical informatics are addressed, with emphasis on tools for medical practitioners to willingly and effectively use computers to capture data and to access information; the conversion of paper records to electronic data to facilitate automation; and system and application integration based on patient medical documents and information. It is pointed out that the computer and communication technologies are sufficiently advanced to provide solutions to these problems. Practical solutions are discussed which use friendly computer user interface tools such as speech, gesture and handwriting recognition with a tablet or notepad computer. Painless transitions of paper record to image, to electronic form and to data, and an optical-disk-based patient medical document are also discussed.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115653824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High performance massively parallel abstract data type components","authors":"I. Yen, F. Bastani, T. Al-Marzooq, E. Leiss","doi":"10.1109/CMPSAC.1990.139351","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139351","url":null,"abstract":"An approach for designing high-performance ADT (abstract data type) components for massively parallel systems without sacrificing information hiding is presented. This approach merges information hiding clients and servers to achieve high communication bandwidth for transmitting requests and receiving responses. It uses multi-entry data structures, massive-state-transition interface operations, and a four-level decomposition approach to achieve both structured programming and information hiding within the ADT implementation. To facilitate the systematic design of various ADTs, they have been classified into three classes: unrelated, crystalline, and amorphous collections. The authors present general design decisions for each layer of each class of ADT and illustrate the theory with a detailed example from each class.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115699319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Real-time scheduling of multiple segment tasks","authors":"Kamhing Ho, James H. Rice, J. Srivastava","doi":"10.1109/CMPSAC.1990.139459","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139459","url":null,"abstract":"The authors study the problem of on-line non-preemptive scheduling of multiple segment real-time tasks. Task segments alternate between using CPU and I/O resources. A task model is proposed which encompasses a wider class of tasks than models proposed earlier. Instead of developing new scheduling algorithms, the authors develop a class of slack distribution policies which use varying degrees of information about task structure and device utilization to budget task slack. Slack distribution policies are shown to improve the performance of all scheduling algorithms studied. Two key observations are: slack distribution is helpful beyond a certain threshold of task arrival rate, and algorithms which normally perform poorly are helped to a greater degree by slack distribution. A study of various scheduling algorithms for a constant value function reveals that all of them favor tasks with a large number of small segments to tasks with a small number of large segments. It is shown that the Moore ordering algorithm is not optimal for multiple segment tasks.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128027756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A software approach to multiprocessor address trace generation","authors":"M. Azimi, C. Erickson","doi":"10.1109/CMPSAC.1990.139335","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139335","url":null,"abstract":"The authors describe a technique for generating architecture-independent multiprocessor data address traces on a widely available RISC (reduced instruction set computer) uniprocessor for a specific class of parallel applications. Automatic modification of the application assembly language enables run-time recording of the virtual address and data for loads and stores. Barrier synchronization events are captured in the traces. The tracing technique (called the Tracer) is relatively fast, portable, and does not require access to a multiprocessor. The generality of the traces and the slow-down by a factor of 10 when generating traces compares favourably with other address tracing methods. The Tracer has proved useful in the evaluation of a hierarchical shared bus multiprocessor. The Tracer can be used to gather statistics on programs for use in stochastic models such as queuing networks. Additionally, the visualization of memory access patterns that can be made with the traces is a useful tool in studying parallel applications on shared memory multiprocessors.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123065176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic query range for multikey searching","authors":"Xian-He Sun, N. Kamel","doi":"10.1109/CMPSAC.1990.139338","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139338","url":null,"abstract":"The use of range searching data structures for general multikey PROJECT-SELECT-JOIN queries is studied. A dynamic query range concept is introduced as a means for performing range searches in kd-trees when the search range contains multi-variable comparisons. A full implementation is described and test results are presented. Thus, through searching on the dynamic query ranges, the general PROJECT-SELECT-JOIN query implementation is facilitated in large databases.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123544054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semantic and structural query reformulation for efficient manipulation of very large knowledge bases","authors":"Sang-goo Lee, Donghoon Shin","doi":"10.1109/CMPSAC.1990.139386","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139386","url":null,"abstract":"The authors present a framework for a knowledge base system that supports complex objects and two-level rules. By assuming that the portion of a rule base that is related to a query is small enough to fit in main memory, the bottleneck of the inference stage is not in unifying or managing complex objects but in identifying relevant rules for the query. However, efficient storage and manipulation of complex objects is critical in the physical database access stage where the fact base consists of large number of general objects. Consequently, the system has been divided into two virtually independent stages. An obvious, application of a two-level rule base is in semantic query optimization, where the integrity constraints will be the semantic rules and application of restrictions from them is optional. By supplying a number of special system predicates, the two-level rule base can be used to control the activities of the knowledge base. The two levels of rules naturally map to rules and meta rules in artificial intelligence applications.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124617518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Benchmarking two types of restricted transitive closure algorithms","authors":"Anestis A. Toptsis, Clement T. Yu, P. Nelson","doi":"10.1109/CMPSAC.1990.139387","DOIUrl":"https://doi.org/10.1109/CMPSAC.1990.139387","url":null,"abstract":"The authors present and evaluate two algorithms-one linear and one logarithmic-for the computation of the restricted transitive closure of a binary database relation. The algorithms are implemented in a relational database management system (Ingres), and on equipment which is fairly common in today's database application environments. The performance evaluation reveals three important points. First, unlike the case of the complete transitive closure computations where the linear (seminaive) method is outperformed by the logarithmic methods, in the computation of the restricted transitive closure the opposite is true. Second, contrary to the popular belief that the algorithms run faster if the size of the intermediate result relations is decreased by deleting excess data, the fastest algorithms are those which attempt to delete no data. Unless deletions can be handled efficiently, their potential benefits are overshadowed by the cost incurred to perform them. Third, the operations union and difference are established as being significantly more expensive than the join operation in these algorithms.<<ETX>>","PeriodicalId":127509,"journal":{"name":"Proceedings., Fourteenth Annual International Computer Software and Applications Conference","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128218116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}