{"title":"Application of the recurrent neural network to the problem of language acquisition","authors":"R. Kamimura","doi":"10.1145/106965.105261","DOIUrl":"https://doi.org/10.1145/106965.105261","url":null,"abstract":"The purpose of this paper is to explore the possibility of langnage acquisition by using the recurrent neural net work. The knowledge of language that native speakers have is supposed to be reflected in the socalled “grammatical competence. ” Thus, the problem is to examine whether the recurrent neural network can acquire the grammatical competence. To simplify the experiments, the grammatical competence means the ability to infer the well-formedness of sentences. The training sentences are generated by the limited number of training sentences and the network must make judgments about the well-formedness of new sentences, The experimental results can be summarized as follows. First, the recurrent back-propagation needs only a few of propagations and back-propagations to obtain the necessary approximate values. Second, the recurrent network can infer the well-formedness of new sentences with sentence formulae of training sentences or new sentence formulae quite well. Third, the generalization performance of the network is not necessarily related to the number of hidden units. In some cases, we can obtain the best performance with no hidden units.","PeriodicalId":359315,"journal":{"name":"conference on Analysis of Neural Network Applications","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127567913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis of a biologically motivated neural network for character recognition","authors":"M. Garris, R. A. Wilkinson, Charles L. Wilson","doi":"10.1145/106965.106967","DOIUrl":"https://doi.org/10.1145/106965.106967","url":null,"abstract":"A neural network architecture for size-invariant and local shape-invariant digit recognition has been developed. The network is based on known biological data on the structure of vertebrate vision but is implemented using more conventional numerical methods for image feature extraction and pattern classification. The input receptor field structure of the network uses Gabor function feature selection. The classification section of the network uses back-propagation. Using these features as neurode inputs, an implementation of back-propagation on a serial machine achieved 100% accuracy when trained and tested on a single font size and style while classifying at a rate of 2 ms per character. Taking the same trained network, recognition greater than 99.9% accuracy was achieved when tested with digits of different font sizes. A network trained on multiple font styles when tested achieved greater than 99.9% accuracy and, when tested with digits of different font sizes, achieved greater than 99.8% accuracy. These networks, trained only with good quality prototypes, recognized images degraded with 15% random noise with an accuracy of 89%. In addition to raw recognition results, a study was conducted where activation distributions of correct responses from the network were compared against activation distributions of incorrect responses. By establishing a threshold between these two distributions, a reject mechanism was developed to minimize substitutional errors. This allowed substitutional errors on images degraded with 10% random noise to be reduced from 2.08% to 0.25%.","PeriodicalId":359315,"journal":{"name":"conference on Analysis of Neural Network Applications","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122319519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supervised adaptive resonance networks","authors":"R. Baxter","doi":"10.1145/106965.126712","DOIUrl":"https://doi.org/10.1145/106965.126712","url":null,"abstract":"Adaptive Resonance Theory (ART) has been used to design a number of massively-parallel, unsupervised, pattern recognition machines. ART networks learn a set of recognition codes by ensuring that input vectors match or resonate with one of a learned set of template vectors. A novelty detector determines whether or not an input vector is new or familiar. Novel input vectors lead to the formation of new recognition codes. Most previous applications of ART networks involve unsupervised learning; i.e., no supervisory or teaching signals are used. However, in many applications it is desirable to have the network learn a mapping between input vectors and output vectors. Herein, extensions of ART networks to allow for supervised training are described. These extended networks can operate in a supervised or an unsupervised mode, and the networks autonomously switch between the two modes. h either mode, these networks develop a set of internal recognition codes in a self-organizing fashion. Since these net works are formulated aa a dynamical system, they are capable of operating in real time and it is not necessary to distinguish between learning and performance. When supervisory signals are absent, these networks predict the desired signal based on previous training. In this paper, in addition to reviewing several popular unsupervised ART networks, two types of extensions of ART networks into a supervised learning regime are discussed. The first type is applicable to problems in which only a unidirectional mapping from input vectors to output vectors is necessary. These supervised ART networks can solve nonlinear discrimination problems, and they can learn the exclusive-OR problem in a single trial. The second type of extension is designed to handle bidirectional mappings between pairs of vectors and is applicable to the more general bidirectional associative learning problem. Permission to copy without fee all or part of rhis material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publicatiort and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish requires a fee and/or specific permission. These extensions open applications of ART networks to a broad range of nonlinear mapping problems for which alternative networks, such aa multilayer perceptions trained via backpropagation, have been used in the past. The fact that these extended ART networks can learn nonlinearly-separable training sets in a single trial demonstrates that these networks are capable of much faster learning than other methods. Potential applications include optical character recognition, automatic target recognition, medical diagnosis, loan and insurance risk analysis, and learning associations between visual objects and their names. 
The application of supervised ART networks to two quite different ","PeriodicalId":359315,"journal":{"name":"conference on Analysis of Neural Network Applications","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133165515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
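A simplified sketch of a supervised ART scheme in the spirit of the abstract above (closest in flavor to ARTMAP-style match tracking; the details are my assumptions, not the paper's exact formulation). Binary inputs use complement coding, each learned template carries an output label, and a label mismatch raises vigilance, so the exclusive-OR problem is learned in a single pass.

```python
# Hedged sketch of supervised ART with match tracking (assumptions, not
# the paper's exact equations). Templates store min-learned prototypes.
import numpy as np

class SupervisedART:
    def __init__(self, vigilance=0.7, beta=1.0):
        self.rho0 = vigilance          # baseline vigilance
        self.beta = beta               # learning rate (1.0 = fast learning)
        self.templates, self.labels = [], []

    def _match(self, x, w):
        return np.minimum(x, w).sum() / max(x.sum(), 1e-9)

    def train(self, x, label):
        rho = self.rho0
        order = sorted(range(len(self.templates)),
                       key=lambda j: -np.minimum(x, self.templates[j]).sum())
        for j in order:
            if self._match(x, self.templates[j]) >= rho:   # resonance test
                if self.labels[j] == label:                 # supervisor agrees
                    self.templates[j] = (self.beta * np.minimum(x, self.templates[j])
                                         + (1 - self.beta) * self.templates[j])
                    return
                # match tracking: raise vigilance just past this match
                rho = self._match(x, self.templates[j]) + 1e-6
        self.templates.append(x.astype(float))              # novel input: new code
        self.labels.append(label)

    def predict(self, x):
        if not self.templates:
            return None
        j = max(range(len(self.templates)),
                key=lambda j: np.minimum(x, self.templates[j]).sum())
        return self.labels[j]

# Exclusive-OR learned in a single pass, using complement coding [x, 1 - x]
net = SupervisedART(vigilance=0.8)
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]
for x, y in data:
    v = np.array(x, float)
    net.train(np.concatenate([v, 1 - v]), y)
for x, y in data:
    v = np.array(x, float)
    assert net.predict(np.concatenate([v, 1 - v])) == y
print("XOR learned in one pass")
```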
{"title":"Neural control of a nonlinear system with inherent time delays","authors":"E. Rietman, R. Frye","doi":"10.1145/106965.105269","DOIUrl":"https://doi.org/10.1145/106965.105269","url":null,"abstract":"We have used a small robot arm to study the use of neural networka as adaptive controller and neural emulators. Our objectives were to investigate nonlinear systems that are accompanied by large time delays. Such systems can be difficult to control, since delays in feedback loops often give rise to instabilities. We have trained neural network emulators to simulate the operation of this system using a database of dynamic stimulus-response. Conventional methods of indirect learning -back-propagating errors through the emulator -to train an inverse kinematic feedforward controller do not work for such systems. Instead, it is necessary to provide the controller with the capability to anticipate future target trajectories. We present an example of such a controller, its function and performance in our prototypical system.","PeriodicalId":359315,"journal":{"name":"conference on Analysis of Neural Network Applications","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116394151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Synthetic aperture radar image formation with neural networks","authors":"T. Frison, S. McCandless, Robert Renze","doi":"10.1145/106965.105262","DOIUrl":"https://doi.org/10.1145/106965.105262","url":null,"abstract":"This paper discusses the use of neural networks to perform synthetic aperture radar (SAR) azimuthal image generation. With a SAR, the positional geometry of a moving radar antenna can be related to the doppler shift of distributed (and possibly moving) targets on the surface. The cross-range image formation can be done with simple linear transforms and is not investigated. Digital matched filter processors require that a computer be programmed to perform sequential correlations between all expected variations of the return waveform and the actual radar return data. For the SAR processor, these operations must be performed for all positions of the antenna to form an image. Image formation is a computation intensive process that may take hours or days, depending on the size and complexity of the image. For example, the SEASAT satellite, launched in 1978, carried a L-band (1.25 Ghz) SAR for ocean imaging. Figure 1 is a SEASAT image of the Long Beach, California area. Only recently has all the data from this system been processed digitally. Interestingly, because digital technology was relatively primitive in the late 1970’s, SEASAT radar data was manipulated as analog data, The image formation was done with optical processors that use light beams and lenses to perform the transforms. These optical processors operate at the speed of light, therefore the image formation is near instantaneous. The image size, resolution, and duty cycle of the analog SEASAT is just now being matched by most “modem” digital data radars. When true large scale analog neural networks become available, SAR image formation could again become a mundane instantaneous operation. SAR processing of coherent complex signal histories is a good candidate for neural network","PeriodicalId":359315,"journal":{"name":"conference on Analysis of Neural Network Applications","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127694146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The object-oriented paradigm and neurocomputing","authors":"P. Prueitt, R. Craig","doi":"10.1145/106965.105259","DOIUrl":"https://doi.org/10.1145/106965.105259","url":null,"abstract":"This paper develops the conjecture that the object-oriented paradigm is the best available modeling paradigm for creating compu’mtional models of the subsystems of the human brain. This conjecture ties together the two main themes of this paper, linking the object-oriented paradigm with neural networks via the dynamicat system and the emergence of a new interdisciplinary research field.","PeriodicalId":359315,"journal":{"name":"conference on Analysis of Neural Network Applications","volume":"39 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120974471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}