{"title":"A geographical correlated data base system","authors":"B. Brinson, R. Cannon","doi":"10.1145/503643.503671","DOIUrl":"https://doi.org/10.1145/503643.503671","url":null,"abstract":"During the early summer of 1977, the Belle W. Baruch Institute for Marine Biology and Coastal Research at the University of South Carolina received a National Science Foundation grant to conduct research into the tidal fluxes of coastal estuaries. Designed to measure the transfer of nutrient and chemical material between the sea and coastal waters, the project will last for three years and will involve the collection and analysis of on-site data at the Institute's estuary near Georgetown, South Carolina. In conjunction with this research, it was also decided to conduct a feasibility study into the practicality of constructing a data base system which could contain all estuary-related biological data, as well as that data for this project. Initial requirements of such a system were set forth as follows: i. The data should be retrievable on an unqualified basis as well as on a specific basis. Interest ranged, for instance, in having the capability to retrieve all information on a specific type of chemical element contained in the data base as well as retrieving particular information on that element at a specific time, or at a specific location, or both. 2. As much as possible, all types of biologically related data should be accounted and planned for. The structure's overall design, however, must be capable of expansion as new types of information are collected. 3. The structure for logical storage must provide for a minimum of wasted space. This requirement is important when considering the fact that no one type of analysis results in the collection of all types of biological data. In fact, only a small fraction of these total types are collected at any one time. 4. The overall design of the data base should account for interaction and rates of exchange of energy between biological specimens, the forcing functions of an evironment, and chemical and nutrient quantities. This requirement, the ability to quantify and store dynamic changes of an estuarian environment, was of particular importance for the current NSF grant. 5. The method of storage should be relatively simple to understand in order that the inexperiienced user could also receive maximum benefit from the data base system and to make expansion a less tedious process. The basic design problem of the system was how to correlate all types of biological information into a useful relationship, as the manner in which data is managed in a computer system determines the degree to which user needs can be satisfied and also governs the efficiency of any information system. It was readily apparent that a single, large record structure affording storage for all biological data promised upwards of 80% wasted space per entry and should be avoided. This premise is supported by the constraints mentioned previously. 
The qualification of when and where data were collected eventually proved to be the best approach as, not only would it continue over time, but it readily correlated all types of diverse inf","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132301969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Balancing methods for binary search trees","authors":"J. Cannady","doi":"10.1145/503643.503684","DOIUrl":"https://doi.org/10.1145/503643.503684","url":null,"abstract":"Binary search trees have received a great deal of attention in recent years. As a result of this interest, several methods have been developed for balancing them; namely, random, height-balanced, bounded-balance, and weight-balanced trees. These methods which include weighted and non-weighted binary search trees are grouped into two classes: 1) dynamic balancing and 2) total restructuring. The rational and properties of the more significant methods are discussed and compared with other tree balancing algorithms. These comparisons provide insight about the conditions under which an algorithm is appropriate.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130231753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Techniques for the construction of small and fast lexical analyzers","authors":"J. Linn","doi":"10.1145/503643.503702","DOIUrl":"https://doi.org/10.1145/503643.503702","url":null,"abstract":"The paper discusses two major issues in the construction of table-driven lexical analyzers. It first examines an encoding of FSM state actions which allows the system to be truly table-driven with little or no program modification required to change the FSM being modeled. This encoding makes use of the knowledge that these actions are typically drawn from a reasonably small set. The second issue involves the storage of the \"next-state\" or transition table used by almost all general purpose scanning systems. A fortuitous encoding of FSM states can result in large savings in space with little cost in time. These techniques can be combined with the standard automata-theoretic approach to yield efficient analyzers.Previous results have shown that the speed of compilation is heavily influenced by the speed of the lexical analyzer. Therefore, these techniques could be used to improve the speed of new or already existing compilers.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"87 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123143475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A goodness of fit algorithm for empirical data","authors":"Ralph A. Bisland, E. Scheuermann","doi":"10.1145/503643.503650","DOIUrl":"https://doi.org/10.1145/503643.503650","url":null,"abstract":"This paper describes a computer program that fits empirical data to any of 10 theoretical distributions. Tests include the Chi-square, Kolmogorov-Smirnov and Moments Test. Of particular interest is an interface language which has been appended in order to ease the burden of input. This program has been used successfully in stochastic simulation and modeling courses for the past four years at the University of Southern Mississippi.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"575 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131212026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An information science simulator for epidemic disease","authors":"W. Stille","doi":"10.1145/503643.503708","DOIUrl":"https://doi.org/10.1145/503643.503708","url":null,"abstract":"The information science simulator for epidemic disease described below is a first step toward the total use of all relevant information to account for disease spread. The relevant information includes descriptive details over time, to any degree of resolution, about the envirm~nent, disease agents and persons involved in a particular epidemic; the potentially large volume of such detail requires the use of an information system. The relevant information also includes knowledge of the disease agent spread and disease manifestations, i.e., the interactions of agent, host and environment; this theory and associated interrelationships is generally represented algorithmically in the simulator and programmed to operate on the data base of observable details. In operation, the system produces realistic descriptive results which unfold in strict accord with the consequences of the assembled knowledge and detail. In th~s way simulations are obtained to symbol ically describe any aspect such as the health states of each person for each time period. The symbolic results may be summarized numerically or transformed to provide answers or evaluations to specific questions. By producing simulated results which correspond to actual outcome observations, the predictive validity of the system may be assessed by comparison of the abstract simulated results to the real results. Upon validation the system can be used to test or evaluate conditions and factors in disease spread by analytically replacing the real data or knowledge with experimental versions. Thus, information science simulation has the potential of adding a very powerful tool in exploration and analytical experimentation in realms of complex phenomena. Management sci~ence-operations research (MS-OR) modeling and simulation was orginated and has been further developed to provide a quantitative basis for operational decision making (i). Its distinguishing features include a focus on decision making, effectiveness measured by cost and the use of formal mathematical models. This approach has been used in health care delivery research and in such other health areas as finding optimal vaccination strategies (2). The mathematical model, or derivations of it, used in the vaccine studies is widely known as the Reed-Frost model (3). Objections have been raised to the Reed-Frost and related models where predictions are required because they are tautological (4). In the exploratory situations which precede decision making and in which the economics or relative costs are unknown, MS-OR is not especially useful. Certainly new relationships, theory and general knowledge developed by information science simulation could become useable subsequently by MS-OR for exploitation; hence, these two approaches are supplemental and not antagonistic. 
While the following description will also show their differences, it should be emphasized that the information science goal is understanding; it depends on an information system of ","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"9 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133684965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Use of a clinical laboratory computer to warn of possible drug interference with test results","authors":"W. E. Groves, Walter H. Gajewski","doi":"10.1145/503643.503686","DOIUrl":"https://doi.org/10.1145/503643.503686","url":null,"abstract":"Drugs can alter the concentration of important biological substances. This alteration can occur during an actual clinical laboratory test procedure for that substance. The frequent administration of drugs, coupled with the increasing reliance on accurate laboratory data, have led us to develop programs which will automatically warn a physician when test results may be altered due to pharmecuticals. Using a clinical laboratory computer system, software has been developed which checks to determine if the result of any test performed on a patient specimen may be affected by any drug administered to that same patient. If possible interference is detected, a comment is automatically attached to the result on the patient's computer-generated report warning the attending physician the result may be falsely elevated or lowered. These programs are run on a laboratory computer system whose central processing unit is a Control Data Corporation, model 1784, computer with 16-bit words and 256K bytes of central storage. All programs are written in Control Data Corporation FORTRAN IV. There are six related programs ranging in size from 560 to 4400 words requiring a total storage capacity of 12,400 words and three data files requiring a total of 213,600 words of storage. The first three programs provide the user with an alphabetized listing of all laboratory result reporting names and are divided to: 1) collect; 2) sort; 3) and print the data. The next program creates a drug/test interference data file which stores information on the effect of a given drug on a variety of laboratory tests. Since patient/drug information may derive from more than one source using different drug code numbers, a program was developed to create or update a data file which serves as a cross-reference index between the laboratory drug number and the pharmacy drug number(s). The final program allows for entry of patient identification and drug data, checks for drug/test interference and, if interference is detected, automatically attaches an appropriate comment to the result in the patient's file. All of these programs operate in a real-time environment.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114141188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The evolution of an FR-80 generated movie of opposed jet fluid flow","authors":"D. Elrod, J. Tunstall","doi":"10.1145/503643.503697","DOIUrl":"https://doi.org/10.1145/503643.503697","url":null,"abstract":"Computer generated graphics play an important role in the numerical modeling of natural phenomena, for example, turbulent and laminar fluid flow, heat transfer, stress analysis, and phase transition. The importance of graphical capabilities was recently reported by Tunstall and Elrod [i], when 35 mm slides generated by the FLOWPLOT [2] program were presented. The advantage of computer generated movies can be seen by the fact that with a movie, not only the data, but any trends within the data can be noted immediately. This paper discusses some of the ideas used by the authors to generate a movie on an FR-80 Graphics Recorder. Before the actual work of generating a movie began, the authors searched the local libraries for literature concerning computer generated movies. Documentation on two movie generating programs and several articles concerning documentary films were found, but there was no information on converting an existing computer graphics program into a movie producing program. The ideas that are presented here are the result of the authors' experiences in producing a movie from an existing plotting program, FLOWPLOT. FLOWPLOT was designed to be used with numerical fluid dynamics (or heat transfer with convection) codes to create velocity plots and/or pressure, density, and temperature contour plots [2]. Results of numerical experiments on a model of gas flow behavior in an opposing jet separation system were reported by J. N. Tunstall [3] using a movie generated by FLOWPLOT. In order to produce a movie by plotting directly on the film, the user must have available a high speed 16 mm pin-registered film plotter. Since the minimum movie projector speed is approximately 16 frames per second, the number of frames required for even a short film is significant. The number of frames that must be plotted to make a movie of a desired length is shown in Table i. Forty frames are drawn per foot of 16 mm film.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125227388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An \"introduction to computing\" experiment that failed","authors":"F. R. Norris","doi":"10.1145/503643.503678","DOIUrl":"https://doi.org/10.1145/503643.503678","url":null,"abstract":"In 1975 the University of North Carolina at Wilmington began offering an undergraduate degree program in Computer Science. At that time an experiment was begun to let students essentially choose their own introductory programming language. This was accomplished by having one language-independent lecture course and several accompanying language laboratories from which students could choose. How the implementation was carried out; its level of acceptance by majors, non-majors, and faculty; and its advantages and disadvantages are discussed.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125266373","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Insertion in a relational data base defined on a non-redundant data model","authors":"Marc Oldham","doi":"10.1145/503643.503713","DOIUrl":"https://doi.org/10.1145/503643.503713","url":null,"abstract":"A compiler for a relational language, called MENTAT, has been described and implemented. This compiler takes the high level language of MENTAT and translates it into an intermediate language which allows a user to create, traverse, and obtain data from a physical data model. The data model is constructed to provide a level of non-redundancy within the elements of the relation. However, this structure posed significant problems when attempts were made to produce an insertion algorithm for it. This paper provides a detailed description of the insertion algorithm.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127728184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"MUSICM: a music synthesis system","authors":"James F. Wirth","doi":"10.1145/503643.503679","DOIUrl":"https://doi.org/10.1145/503643.503679","url":null,"abstract":"During a performance the various instruments, musical parts, envelope generators, etc. are kept in step by a synchronizer routine which operates much like a discrete event simulator. The output to the synthesizer is buffered since the amount of channel activity can vary greatly. For example, considerable activity will take place on the I/0 port when all the instruments play a new note simultaneously.","PeriodicalId":166583,"journal":{"name":"Proceedings of the 16th annual Southeast regional conference","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1978-04-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127520546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}