{"title":"Pulse radar detection using a multi-layer neural network","authors":"H. Kwan, C. K. Lee","doi":"10.1109/IJCNN.1989.118681","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118681","url":null,"abstract":"The application of a multilayer feedforward neural network to pulse radar detection or pulse compression is presented. For illustration, the Barker code was used. This network has 13 input units, 3 hidden units, and 1 output unit. Backpropagation learning was used to train the network. A 40-dB peak signal-to-noise ratio can be achieved easily. The processing time is expected to be much faster than that obtained using correlation and mismatched filtering approaches.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121819257","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Biophysical basis, stability, and directional response characteristics of multiplicative lateral inhibitory neural networks","authors":"A. Bouzerdoum, R. B. Pinter","doi":"10.1109/IJCNN.1989.118458","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118458","url":null,"abstract":"Summary form only given, as follows. A directionally selective nonlinear lateral inhibitory neural network is proposed. The network originates from the biophysical mechanism of shunting inhibition. Its stability and some of its directional response characteristics are examined. The most significant property of this network is its differential response to stimuli moving in opposite directions. This directional response is found to vary with the mean input strength level, the size and speed of moving objects, as well as with coupling among elements of the network.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127232372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the cognitive function of deterministic chaos in neural networks","authors":"G. Basti, A. Perrone","doi":"10.1109/IJCNN.1989.118648","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118648","url":null,"abstract":"In neurophysiology experimental evidence has recently been produced suggesting that the different features of the external stimuli, processed in parallel along different pathways on the spatial dimension, are integrated dynamically on the temporal dimension. For this task, the deterministic chaos, experimentally found in the oscillatory behavior of nerve cell arrays of the sensory cortex, plays an essential role that is not yet clear from the theoretical standpoint. The authors propose a first approach to this problem. By the study of H. Sompolinsky's theoretical model of a neural net, which implements chaotic behavior in a dynamical Hopfield net, the authors show some properties of a chaotic net with respect to more classical models, such as the Rosenblatt perceptron, Hopfield net, and Boltzmann machine. At the same time, they advance theoretical research that links all these approaches. They suggest a first step toward the construction of a learning procedure founded on chaotic dynamics.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125888027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Preliminary results of applying neural networks to ship image recognition","authors":"D. Lee","doi":"10.1109/IJCNN.1989.118321","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118321","url":null,"abstract":"Summary form only given, as follows. A set of 39 pictures of four ship models in various positions was collected. The pictures were preprocessed to remove position and scale variations. In each picture (40*100 pixels) the ship image extended to both sides of the picture or from top to bottom. A subset of these pictures was used to train a large neural network (NN) using the generalized delta rule learning algorithm. The NN was tested on both the original images and simulated mirror images of the ships. When the maximum output from both presentations was used for making a classification decision, the NN successfully recognized the ships in all positions. It is observed that using first-layer weights initialized to zero produces faster learning and better performance than networks using only randomized weights.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123541477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A network with multi-partitioning units","authors":"Y. Tan, T. Ejima","doi":"10.1109/IJCNN.1989.118279","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118279","url":null,"abstract":"The authors propose a fuzzy partition model (FPM), a multilayer feedforward perceptron-like network. The most important point of FPM is that it has multiple-input/output units which are upper compatible with the threshold units commonly used in the backpropagation (BP) model. The number of outputs is called the degree N of that unit, and an FPM unit can classify input patterns into N categories. Because the sum total of the output values of an FPM unit is always one, Kullback divergence is adopted as a network measure to derive its learning rule. The fact that the learning rule does not include the derivative of a sigmoid function, which causes the convergence of the network to be slow, contributes to its fast learning ability. The authors applied FPM to some basic problems, and the results indicated the high potential of this model.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125564162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High precision position control by Cartesian trajectory feedback and connectionist inverse dynamics feedforward","authors":"D. Bassi, G. Bekey","doi":"10.1109/IJCNN.1989.118718","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118718","url":null,"abstract":"An optimal Cartesian trajectory determination coupled with a connectionist approach to perform the dynamics inversion is presented. This method uses a recurrent calculation of the optimal Cartesian trajectory function in order to drive the arm to the desired position and velocity in the desired time. Using this principle of dynamic optimality it is shown that it is possible to achieve the goal with an arbitrary precision even though the inverse dynamics transformation is only an approximation obtained by a neural network. The analysis of simulated control strategy shows that the relative position error for a start-stop movement follows a high inverse power law with respect to the number of feedback control steps. This result indicates that it is practical to control a manipulator to an arbitrary degree of precision by using a neural network whose transformation has a relatively low precision.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125577828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the use of neural networks and fuzzy logic in speech recognition","authors":"A. Amano, T. Aritsuka, N. Hataoka, A. Ichikawa","doi":"10.1109/IJCNN.1989.118595","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118595","url":null,"abstract":"A rule-based phoneme recognition method is proposed. This method uses neural networks for acoustic feature detection and fuzzy logic for the decision procedure. Rules for phoneme recognition are prepared for each pair of phonemes (pair-discrimination rules). Recognition experiments were performed using Japanese city names uttered by two male speakers. About 80% of the errors occurring in conventional template matching, which the discrimination rules were designed to recover, were in fact recovered (an improvement in recognition rate of 4.0 to 8.0%). This confirms the effectiveness of the proposed method.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126613132","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deterministic networks for image estimation using a penalty function method","authors":"Anand Rangarajan, T. Simchony, R. Chellappa","doi":"10.1109/IJCNN.1989.118495","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118495","url":null,"abstract":"Summary form only given. A novel technique for image estimation which preserves discontinuities is presented. Gibbs distributions are used for image representations. These distributions also incorporate unobserved discontinuity variables or line processes. The degradation model is also Gibbs, which yields a posterior Gibbs distribution. The authors are interested in the maximum a posteriori (MAP) estimate. This reduces to finding the minimum of a Hamiltonian (energy function). The authors use a penalty function approach to solve the problem. This permits identifying the line processes as neurons with a graded response. The penalty function method also permits incorporating 'hard' and 'soft' constraints into the problem. These typically involve constraints on line endings, inhibition of adjacent parallel lines, preservation of line continuity of corners, etc. The authors propose two algorithms to solve this problem; the conjugate gradient (CG) and the iterated conditional mode (ICM) algorithms. Both algorithms are amenable to implementation on 'hybrid' networks.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114922565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural network algorithms for motion stereo","authors":"Y. Zhou, R. Chellappa","doi":"10.1109/IJCNN.1989.118707","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118707","url":null,"abstract":"Motion stereo infers depth information from a sequence of image frames. Both batch and recursive neural network algorithms for motion stereo are presented. A discrete neural network is used for representing the disparity field. The batch algorithm first integrates information from all images by embedding them into the bias inputs of the network. Matching is then carried out by neuron evaluation. This algorithm implements the matching procedure only once, unlike conventional batch methods requiring matching many times. The method uses a recursive least square algorithm to update the bias inputs of the network. The disparity values are uniquely determined by the neuron states after matching. Since the neural network can be run in parallel and the bias input updating scheme can be executed on line, a real-time vision system employing such an algorithm is very attractive. A detection algorithm for locating occluding pixels is also included. Experimental results using natural image sequences are given.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115018261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A sequential adder using recurrent networks","authors":"Fu-Sheng Tsung, G. Cottrell","doi":"10.1109/IJCNN.1989.118690","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118690","url":null,"abstract":"D.E. Rumelhart et al.'s proposal (1986) of how symbolic processing is achieved in PDP (parallel distributed processing) networks is tested by training two types of recurrent networks to learn to add two numbers of arbitrary lengths. A method of combining old and new training sets is developed which enables the network to learn and generalize with very large training sets. Through this model of addition, these networks demonstrated capability to do simple conditional branching, while loops, and sequences, mechanisms essential for a universal computer. Differences between the two types of recurrent networks are discussed, as well as implications for human learning.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115218645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}