{"title":"Hybrid distributed/local connectionist architectures","authors":"Tariq Samad","doi":"10.1109/IJCNN.1989.118344","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118344","url":null,"abstract":"Summary form only given, as follows. A class of neural network architectures is described that uses both distributed and local representation. The distributed representations are used for input and output, thereby enabling associative, noise-tolerant interaction with the environment. Internally, all representations are fully local. This simplifies weight assignment and makes the networks easy to configure for specific applications. These hybrid distributed/local architectures are especially useful for applications where structured information needs to be represented. Three such applications are briefly discussed: a scheme for knowledge representation, a connectionist rule-based system, and a knowledge-base browser.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114755910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A new back-propagation algorithm with coupled neuron","authors":"M. Fukumi, S. Omatu","doi":"10.1109/IJCNN.1989.118442","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118442","url":null,"abstract":"Summary form only given, as follows. A novel algorithm is developed for training multilayer fully connected feedforward networks of coupled neurons with both signoid and signum functions. Such networks can be trained by the familiar backpropagation algorithm since the coupled neuron (CONE) proposed uses the differentiable sigmoid function for its trainability. The algorithm is called CNR, or coupled neuron rule. The backpropagation (BP) and MRII algorithms which have both advantages and disadvantages have been developed earlier. The CONE takes advantages of the key ideas of both methods. By applying CNR to a simple network, it is shown that the convergence of the output error is much faster than that of the BP method when the variable learning rate is used. Finally, simulation results illustrate the effective learning algorithm.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130581579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel objective function for improved phoneme recognition using time delay neural networks","authors":"J. Hampshire, A. Waibel","doi":"10.1109/IJCNN.1989.118586","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118586","url":null,"abstract":"The authors present single- and multispeaker recognition results for the voiced stop consonants /b, d, g/ using time-delay neural networks (TDNN), a new objective function for training these networks, and a simple arbitration scheme for improved classification accuracy. With these enhancements a median 24% reduction in the number of misclassifications made by TDNNs trained with the traditional backpropagation objective function is achieved. This redundant results in /b, d, g/ recognition rates that consistently exceed 98% for TDNNs trained with individual speakers; it yields a 98.1% recognition rate for a TDNN trained with three male speakers.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124182205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimization of a digital neuron design","authors":"F. Kampf, P. Koch, K. Roy, M. Sullivan, Z. Delalic, S. DasGupta","doi":"10.1145/99633.99644","DOIUrl":"https://doi.org/10.1145/99633.99644","url":null,"abstract":"Summary form only given, as follows. Artificial neural network models, composed of many nonlinear processing elements operating in parallel, have been extensively simulated in software. The real estate required for neurons and their interconnections has been the major hindrance for hardware implementation. Therefore, a reduction in neuron size is highly advantageous. A digital neuron design consisting of an arithmetic logic unit (ALU) has been implemented to conform to the hard-limiting threshold function. Studies on reducing the ALU size, utilizing Monte-Carlo simulations, indicate that the effect of such a reduction on network reliability and efficiency is not detrimental. Neurons with reduced ALU size operate with the same computational abilities as full-size neurons.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115251387","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Variants of self-organizing maps","authors":"J. Kangas, T. Kohonen, Jorma T. Laaksonen","doi":"10.1109/IJCNN.1989.118292","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118292","url":null,"abstract":"Self-organizing maps have a connection with traditional vector quantization. A characteristic which makes them resemble certain biological brain maps, however, is the spatial order of their responses which is formed in the learning process. Two innovations are discussed: dynamic weighting of the input signals at each input of each cell, which improves the ordering when very different input signals are used, and definition of neighborhoods in the learning algorithm by the minimum spanning tree, which provides a far better and faster approximation of prominently structured density functions. It is cautioned that if the maps are used for pattern recognition and decision processes, it is necessary to fine-tune the reference vectors such that they directly define the decision borders.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124865076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multitarget tracking with an optical neural net using a quadratic energy function","authors":"M. Yee, E. Barnard, D. Casasent","doi":"10.1117/12.969762","DOIUrl":"https://doi.org/10.1117/12.969762","url":null,"abstract":"Summary form only given, as follows. Multitarget tracking over consecutive pairs of time frames is accomplished with a neural net. This involves position and velocity measurements of the targets and a quadratic neural energy function. Simulation data are presented, and an optical implementation is discussed.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117072445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An electrically trainable artificial neural network (ETANN) with 10240 'floating gate' synapses","authors":"M. Holler, S. Tam, H. Castro, Robert G. Benson","doi":"10.1109/IJCNN.1989.118698","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118698","url":null,"abstract":"The use of floating-gate nonvolatile memory technology for analog storage of connection strengths, or weights, has previously been proposed and demonstrated. The authors report the analog storage and multiply characteristics of a new floating-gate synapse and further discuss the architecture of a neural network which uses this synapse cell. In the architecture described 8192 synapses are used to interconnect 64 neurons fully and to connect the 64 neurons to each of 64 inputs. Each synapse in the network multiplies a signed analog voltage by a stored weight and generates a differential current proportional to the product. Differential currents are summed on a pair of bit lines and transferred through a sigmoid function, appearing at the neuron output as an analog voltage. Input and output levels are compatible for ease in cascade-connecting these devices into multilayer networks. The width and height of weight-change pulses are calculated. The synapse cell size is 2009 mu m/sup 2/ using 1- mu m CMOS EEPROM technology.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1990-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124415110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neuroplanners for hand/eye coordination","authors":"D. H. Graf, W. LaLonde","doi":"10.1109/IJCNN.1989.118296","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118296","url":null,"abstract":"The authors generalize a previously described architecture, which they now call a neuroplanner, and apply it to an extension of the problem it was initially designed to solve-the target-directed control of a robot arm in an obstacle-cluttered workspace. By target directed they mean that the arm can position its end-effector at the point of gaze specified by a pair of stereo targetting cameras. Hence, the system is able to 'touch the point targetted by its eyes. The new design extends the targetting system to an articulated camera platform-the equivalent of the human eye-head-neck system. This permits the robot to solve the inverse problem: given the current configuration of the arm, the system is able to reorient the camera platform to focus on the end-effector. Because of obstacles, the camera platform will generally have to peer around obstacles that block its view. Hence the new system is able to move the eye-head-neck system to see the hand.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132586589","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A backpropagation network for classifying auditory brainstem evoked potentials: input level biasing, temporal and spectral inputs and learning patterns","authors":"Dogan Alpsan, can Ozdamar","doi":"10.1109/IJCNN.1989.118422","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118422","url":null,"abstract":"Summary form only given, as follows. The results of an investigation conducted to examine the effects of various input data forms on learning of a neural network for classifying auditory evoked potentials are presented. The long-term objective is to use the classification in an automated device for hearing threshold testing. Feedforward multilayered neural networks trained with the backpropagation method are used. The effects of presenting the data to the neural network in various temporal and spectral modes are explored. Results indicate that temporal and spectral information complement one another and increase performance when used together. Learning curves and dot graphs as they are used in this study may reveal network learning strategies. The nature of such learning patterns found in this study is discussed.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127810920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A neuro-expert architecture for object recognition","authors":"J. Selinsky, A. Guez, J. Eilbert, M. Kam","doi":"10.1109/IJCNN.1989.118315","DOIUrl":"https://doi.org/10.1109/IJCNN.1989.118315","url":null,"abstract":"Summary form only given, as follows. A report is presented on results of experiments in object recognition with a combined neural network/expert system architecture (neuro-expert). The neuro-expert architecture is outlined with a description of the experimental object recognition system. Results are reported for the recognition of a 20-pattern prototype set of synthesized binary images placed at arbitrary rotations. A 100% recognition rate was obtained under noiseless conditions. Addition of 1% and 2% random pixel noise resulted in recognition rates of 95.2% and 89.5%, respectively.<<ETX>>","PeriodicalId":199877,"journal":{"name":"International 1989 Joint Conference on Neural Networks","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131760487","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}