{"title":"Design of parallel distributed Cauchy machines","authors":"Yoshiyasu Takefuji, H. Szu","doi":"10.1109/IJCNN.1989.118629","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118629","url":null,"abstract":"A parallel and stochastic version of Hopfield-like neural networks is presented. Cauchy color noise is assumed. The specific noise is desirable for fast convergence to a fixed point representing a neighborhood minimum. It can be quickly quenched at each iteration according to a proven cooling schedule in generating random states on the energy landscape. An exact Cauchy acceptance criterion is analytically derived for hill-climbing capability. The improvement is twofold: a faster cooling schedule (the inversely linear cooling schedule characterized by the Cauchy simulated annealing) and parallel executions of all neurons. Such a Cauchy machine can be electronically implemented, and the design is given.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"216 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130726729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parallel distributed gradient descent and ascent methods","authors":"Yoshiyasu Takefuji","doi":"10.1109/IJCNN.1989.118349","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118349","url":null,"abstract":"Summary form only given, as follows. A parallel distributed processing architecture called an entropy machine (EM) is proposed. This machine, which is based on an artificial neural network composed of massive neurons and interconnections, is used for solving a variety of NP-complete optimization problems. The EM performs the parallel distributed gradient descent method or gradient ascent method to search for minima or maxima.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124765565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Self organizing networks with a split and merge algorithm","authors":"A. Kulkarni, G. Whitson","doi":"10.1109/IJCNN.1989.118548","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118548","url":null,"abstract":"Summary form only given, as follows. The authors present a novel learning algorithm for artificial neural networks based on the split and merge technique. The algorithm detects the similarity between the input patterns, and identifies the number of categories present in input samples. The algorithm is similar to the competitive learning algorithm; however, unlike in the competitive algorithm, the authors suggest two types of weights: long-term weights (LTWs) and short-term weights (STWs). The LTWs provide to the network the stability with respect to irrelevant input patterns, whereas the STWs provide the plasticity. The model with the split and merge algorithm has been developed and is used to categorize the pixels in the multispectral image based on the observed spectral signatures.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114669495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A modular neural network architecture for sequential paraphrasing of script-based stories","authors":"R. Miikkulainen, M. Dyer","doi":"10.1109/IJCNN.1989.118677","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118677","url":null,"abstract":"Sequential recurrent neural networks have been applied to a fairly high-level cognitive task, i.e. paraphrasing script-based stories. Using hierarchically organized modular subnetworks, which are trained separately and in parallel, the complexity of the task is reduced by effectively dividing it into subgoals. The system uses sequential natural language input and output and develops its own I/O representations for the words. The representations are stored in an external global lexicon and are adjusted in the course of training by all four subnetworks simultaneously, according to the FGREP-method. By concatenating a unique identification with the resulting representation, an arbitrary number of instances of the same word type can be created and used in the stories. The system is able to produce a fully expanded paraphrase of the story from only a few sentences, i.e. the unmentioned events are inferred. The word instances are correctly bound to their roles, and simple plausible inferences of the variable content of the story are made in the process.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130870208","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Document classification using associative memories","authors":"W. Lin, C.-K. Tsao","doi":"10.1109/IJCNN.1989.118322","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118322","url":null,"abstract":"Summary form only given. An automated document classification system is presented which deals with documents containing horizontal lines and text. The system consists of two major components: the preprocessing module and the classification module. The preprocessing module digitizes input documents, scales them to a suitable level of resolution, and locates horizontal lines. The classification module is a heteroassociative memory that makes the decision on the type of input documents in the form of multiple line patterns. The experimental results using unidirectional linear associative memory show that the classification error rate is near zero. Discussions about employing the bidirectional associative memory in the classification module is also given.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122415827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Reinforcement learning algorithms as function optimizers","authors":"Ronald J. Williams","doi":"10.1109/IJCNN.1989.118683","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118683","url":null,"abstract":"Any nonassociative reinforcement learning algorithm can be viewed as a method for performing function optimization through (possibly noise-corrupted) sampling of function values. A description is given of the results of simulations in which the optima of several deterministic functions studied by D.H. Ackley (Ph.D. Diss., Carnegie-Mellon Univ., 1987) were sought using variants of REINFORCE algorithms. Results obtained for certain of these algorithms compare favorably to the best results found by Ackley.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115474969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing different neural network architectures for classifying handwritten digits","authors":"Isabelle M Guyon, I. Poujaud, L. Personnaz, G. Dreyfus, J. Denker, Y. Le Cun","doi":"10.1109/IJCNN.1989.118570","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118570","url":null,"abstract":"An evaluation is made of several neural network classifiers, comparing their performance on a typical problem, namely handwritten digit recognition. For this purpose, the authors use a database of handwritten digits, with relatively uniform handwriting styles. The authors propose a novel way of organizing the network architectures by training several small networks so as to deal separately with subsets of the problem, and then combining the results. This approach works in conjunction with various techniques including: layered networks with one or several layers of adaptive connections, fully connected recursive networks, ad hoc networks with no adaptive connections, and architectures with second-degree polynomial decision surfaces.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124489571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sensory adaptation: an information-theoretic viewpoint","authors":"Mark D. Plumbley, F. Fallside","doi":"10.1109/IJCNN.1989.118395","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118395","url":null,"abstract":"Summary form only given. The authors examine the goals of early stages of a perceptual system, before the signal reaches the cortex, and describe its operation in information-theoretic terms. The effects of receptor adaptation, lateral inhibition, and decorrelation can all be seen as part of an optimization of information throughput, given that available resources such as average power and maximum firing rates are limited. The authors suggest a modification to Gabor functions which improves their performance as band-pass filters.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"328 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122742421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An implementation of the equilibrium trajectory hypothesis for movement generation in the arm","authors":"R. Shadmehr","doi":"10.1109/IJCNN.1989.118417","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118417","url":null,"abstract":"Summary form only given. A mathematical model of the neuromuscular system is built to describe some of the consequences of the equilibrium trajectory hypothesis (ETH) regarding the role of spinal control structures in movement. This model builds on the assumption that the spring-like reaction of the arm to small disturbances is mainly due to the length-tension properties of the muscles and not the length-dependent spinal reflexes. In order to explore point-to-point movements, a two-joint model of the arm is constructed, and its inverse dynamics are solved to predict movement trajectories for developed muscular forces. ETH suggests that movement is controlled by the central nervous system through gradual shifting of the arm's equilibrium point. A minimum jerk criterion function is used to define this virtual trajectory. An algorithm is suggested for assigning firing rates for a given virtual trajectory. To determine the role of the spinal reflexes, the model is tested in the case where no afferent information is available, so the virtual trajectory serves as the only source of neuromuscular activation.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"147 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116050950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hardware realisable models of neural processing","authors":"W. G. Chambers, T. Clarkson, D. Gorse, J. Taylor","doi":"10.1109/IJCNN.1989.118649","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118649","url":null,"abstract":"Summary form only given. An identity that has been recently established by D. Gorse and J.G. Taylor (Phys. Lett., vol.A131, p.326, 1988) between a certain class of neural model (originally proposed by Taylor) and a simple piece of electronic hardware, the probabilistic random-access memory (pRAM) holds out the possibility of mimicking physiological nets in hardware. The Taylor model has recently been extended to examine in more detail the pre- and postsynaptic processes that lead up to neural firing. The extended model retains the successful features of the original, but by operating at much shorter time scales (on the order of the lifetime of a quantum of neurotransmitter in the synaptic cleft) it allows higher order statistical information to be retrieved from the simulated spike train. It is capable of incorporating a great deal of biological detail, including effects associated with the mechanism of summation of postsynaptic potentials (PSPs), cell surface geometry, and axo-axonal interactions. Like its predecessor the new model has a straightforward hardware implementation as a pRAM, in which parameters relating to PSP summation and firing behavior can be changed by simply writing to the appropriate set of memory locations.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"4 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132063429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}