{"title":"Some techniques for shading machine renderings of solids","authors":"Arthur Appel","doi":"10.1145/1468075.1468082","DOIUrl":"https://doi.org/10.1145/1468075.1468082","url":null,"abstract":"Some applications of computer graphics require a vivid illusion of reality. These include the spatial organization of machine parts, conceptual architectural design, simulation of mechanisms, and industrial design. There has been moderate success in the automatic generation of wire frame, cardboard model, polyhedra, and quadric surface line drawings. The capability of the machine to generate vivid stereographic pictures has been demonstrated. There are, however, considerable reasons for developing techniques by which line drawings of solids can be shaded, especially the enhancement of the sense of solidity and depth. Figures 1 and 2 illustrate the value of shading and shadow casting in spatial description. In the line drawing there is no clue as to the relative position of the flat plane and the sheet metal console. When shadows are rendered, it is clear that the plane is below and to the rear of the console, and the hollow nature of the sheet metal assembly is emphasized. Shading can specify the tone or color of a surface and the amount of light falling upon that surface from one or more light sources. Shadows, when sharply defined, tend to suggest another viewpoint and improve surface definition. When controlled, shading can also emphasize particular parts of the drawing. 
If techniques for the automatic determination of chiaroscuro with good resolution should prove to be competitive with line drawings, and this is a possibility, machine generated photographs might replace line drawings as the principal mode of graphical communication in engineering and architecture.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133257012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiprogramming, swapping and program residence priority in the FACOM 230-60","authors":"M. Tsujigado","doi":"10.1145/1468075.1468109","DOIUrl":"https://doi.org/10.1145/1468075.1468109","url":null,"abstract":"The FACOM 230-60 is a large-scale electronic digital computer developed by Fujitsu Limited. The system consists of 1) up to 2 processing units, 2) a 256k word (maximum) high speed core memory that operates at a 0.92 μsec cycle time, or at an effective cycle time of 0.15 μsec with 16 memory banks and 3) a 768k word (maximum) low speed core memory that operates at a 6.0 μsec cycle time, or at an effective cycle time of 1.0 μsec with 6 memory banks.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"274 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115210637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A progress report on large capacity magnetic film memory development","authors":"J. Raffel, A. Anderson, T. Crowther, T. Herndon, C. Woodward","doi":"10.1145/1468075.1468114","DOIUrl":"https://doi.org/10.1145/1468075.1468114","url":null,"abstract":"In 1964 we proposed an approach to magnetic film memory development aimed at providing large, high-speed, low-cost random-access memories. Almost without exception, all early attempts at film memory design emphasized speed with little consideration for the potential of batch-fabrication to reduce costs. Based on our earlier work in building the first film memory in 1959, and a 1,000 word, 400 nsec model for the TX-2 computer in 1962, we had reached some fundamental conclusions about the compatibility of high speed and low cost for destructive-readout film memories.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124882747","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine-to-man communication by speech part 1: generation of segmental phonemes from text","authors":"F. Lee","doi":"10.1145/1468075.1468125","DOIUrl":"https://doi.org/10.1145/1468075.1468125","url":null,"abstract":"For many years man has been receiving messages from machines in printed form. Teletypes, computer console typewriters, high-speed printers and, more recently, character display oscilloscopes have become familiar in the role that they play in machine-to-man communication. Since most computers are now capable of receiving instructions from remote locations through ordinary telephone lines, it is natural that we ask whether, with all of the sophistication that we have acquired in computer usage, we can communicate with the computer in normal speech. At the input of the computer, there is the automatic speech recognition problem, and at the output, the problem of speech synthesis from messages in text form. The problem of automatic speech recognition is substantially more difficult than the speech synthesis problem. While an automatic speech recognizer capable of recognizing connected speech from many individual speakers with essentially no restriction on the vocabulary is many years away, the generation of connected speech from text with similar restrictions on vocabulary is now well within our reach.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130054550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fast 2½D mass memory","authors":"C. Schuur","doi":"10.1145/1468075.1468115","DOIUrl":"https://doi.org/10.1145/1468075.1468115","url":null,"abstract":"The mass memory described in this paper is a randomly addressable magnetic core memory having a storage capacity of 0.5 Megabytes.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121142146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sorting networks and their applications","authors":"K. Batcher","doi":"10.1145/1468075.1468121","DOIUrl":"https://doi.org/10.1145/1468075.1468121","url":null,"abstract":"To achieve high throughput rates today's computers perform several operations simultaneously. Not only are I/O operations performed concurrently with computing, but also, in multiprocessors, several computing operations are done concurrently. A major problem in the design of such a computing system is the connecting together of the various parts of the system (the I/O devices, memories, processing units, etc.) in such a way that all the required data transfers can be accommodated. One common scheme is a high-speed bus which is time-shared by the various parts; speed of available hardware limits this scheme. Another scheme is a cross-bar switch or matrix; limiting factors here are the amount of hardware (an m × n matrix requires m × n cross-points) and the fan-in and fan-out of the hardware.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"6 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129193141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiprogramming system performance measurement and analysis","authors":"H. N. Cantrell, A. L. Ellison","doi":"10.1145/1468075.1468108","DOIUrl":"https://doi.org/10.1145/1468075.1468108","url":null,"abstract":"Why \"...design without evaluation usually is inadequate.\" \"Simulation... is applicable wherever we have a certain degree of understanding of the process to be simulated.\" \"The key to performance evaluation as well as to systems design is an understanding of what systems are and how they work.\" \"The purpose of measurement is insight, not numbers.\" Why should we spend time and money analyzing the performance of computer systems or computer programs? These systems or programs have been debugged. They work. They were designed for optimum performance by competent people who are just as convinced that their performance is optimum as they are that the program or system is logically correct. Why then should we analyze performance? There are three main reasons: 1. There may be performance bugs in a program. Performance bugs are the result of errors in evaluation or judgment on performance optimization. We have no reason to suspect that performance bugs are any less frequent or less serious than logical bugs. Thus, if the performance of a program or a system is important then it should be performance debugged by measurement and analysis. 2. If a new or better system or program is to be designed, then a good, quantitative understanding of the performance of previous systems is necessary to avoid performance bugs in the new design. 3. If an important program or system is intolerably slow, then the real reasons for its poor performance must be found by measurement and analysis. Otherwise time and money may be spent correcting many obvious but minor inefficiencies with no great effect on overall performance. Worse yet, the whole thing may be reimplemented with all key bugs preserved! 
In all three of these reasons the objective of performance analysis is to understand the unknown. We're looking for performance bugs. If we knew what these bugs were and what they cost in performance, then we wouldn't have to look for them. But because we are looking for unknown performance limiters, we don't know in advance what performance gains can be made by finding and fixing these bugs. We don't even know how hard or how easy it will be to fix these bugs after we find them. Thus, there is no way of predicting the performance payoff from the time and money spent in performance analysis. This time and money must be spent at risk, essentially on the basis of faith that the payoffs from performance analysis will exceed the value of any alternative way of spending this time and money. This is nothing new. This \"faith\" concept is well understood and accepted in the scientific and engineering communities. One of the purposes of this paper is to demonstrate that analysis pays off in programming to at least the same degree as in engineering and science.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133924494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Baylor Medical School teleprocessing system: operational time-sharing on a system/360 computer","authors":"William F. Hobbs, A. H. Levy, J. McBride","doi":"10.1145/1468075.1468080","DOIUrl":"https://doi.org/10.1145/1468075.1468080","url":null,"abstract":"The Baylor Teleprocessing System (BTS) is designed to operate as a time-sharing system. It accomplishes the following functions: 1. It allows several jobs initiated from various terminals to run concurrently with one batch job stream. 2. It permits the use of high-level languages for the construction of all programs, including those designed for remote terminals. 3. It insulates the user program from changes in the operating system by providing a set of macroinstructions and interface routines for input and output over telecommunication lines. 4. It provides certain utility functions for the terminal user, including the ability to build, alter, and retrieve data sets, and to communicate with the machine operator and other terminal users. 5. It provides a means by which programs originally written to run as batch jobs may be used from a remote terminal. 6. It insulates user programs from hardware errors originating during data transmission.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133981138","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An error-correcting data link between small and large computers","authors":"S. Andreae, Robert W. Lafore","doi":"10.1145/1468075.1468093","DOIUrl":"https://doi.org/10.1145/1468075.1468093","url":null,"abstract":"The need for a data-link connecting small data-acquisition computers to a central computer with great analysis power arose in a particular context at the Lawrence Radiation Laboratory in Berkeley. Both the type of high-energy physics experiments being performed and the operation of the available large computer, a CDC 6600, posed unusual design problems.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122548865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Electrically alterable digital differential analyzer","authors":"G. P. Hyatt, Gene Ohlberg","doi":"10.1145/1468075.1468100","DOIUrl":"https://doi.org/10.1145/1468075.1468100","url":null,"abstract":"Computer simulations are performed for scientific problems using analog computer techniques for speed and functional similarity to the actual problem. The analog computers are severely limited in accuracy. This inherent accuracy limitation on high speed simulation requirements can be overcome by the application of the Teledyne Electrically Alterable Digital Differential Analyzer (TEADDA), which is a high speed completely parallel \"digital analog\" of an analog computer.","PeriodicalId":180876,"journal":{"name":"Proceedings of the April 30--May 2, 1968, spring joint computer conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1968-04-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128747596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}