{"title":"Behaviors of transform domain backpropagation (BP) algorithm","authors":"Xiahua Yang, P. Xue","doi":"10.1109/IJCNN.1991.170426","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170426","url":null,"abstract":"Several discrete orthogonal transforms have been used to study the behaviors of transform-domain backpropagation (BP) algorithms. Two examples of computer simulation show that, on selecting the appropriate parameters and the suitable structures of a neural network, the performance of the transform-domain BP algorithm is somewhat better than that of the original time-domain BP algorithm, regardless of which discrete orthogonal transform is applied. Among the transforms that have been used, the behaviors of the discrete cosine transform (DCT) and an alternative version of it are believed to be the best.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133721426","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pattern extraction and recognition for noisy images using the three-layered BP model","authors":"K. Imai, K. Gouhara, Y. Uchikawa","doi":"10.1109/IJCNN.1991.170414","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170414","url":null,"abstract":"The authors present a novel pattern recognition architecture using three-layered backpropagation (BP) models. The proposed architecture consists mainly of the following two completely separate functions: extraction of a target pattern and recognition of the extracted pattern. It is possible that the proposed architecture detects where and what the target pattern is. In order to realize these functions, the following networks are introduced: filtering network, position network, size network, frame-working network, and categorizing networks. Results of handwritten-letter recognition experiments show that the proposed architecture has the ability to recognize a deformed target pattern in an original image with much noise, especially lumped noises.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134040708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A parallel Kalman algorithm for fast learning of multilayer neural networks","authors":"C.-M. Cho, H.-S. Don","doi":"10.1109/IJCNN.1991.170644","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170644","url":null,"abstract":"A fast learning algorithm is proposed for training of multilayer feedforward neural networks, based on a combination of optimal linear Kalman filtering theory and error propagation. In this algorithm, all the information available from the start of the training process to the current training sample is exploited in the update procedure for the weight vector of each neuron in the network in an efficient parallel recursive method. This innovation is a massively parallel implementation and has better convergence properties than the conventional backpropagation learning technique. Its performance is illustrated on some examples, such as a XOR logical operation and a nonlinear mapping of two continuous signals.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131892497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic competitive learning for centroid estimation","authors":"S. Kia, G. Coghill","doi":"10.1109/IJCNN.1991.170507","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170507","url":null,"abstract":"Presents an analog version of an artificial neural network, termed a differentiator, based on a variation of the competitive learning method. The network is trained in an unsupervised fashion, and it can be used for estimating the centroids of clusters of patterns. A dynamic competition is held among the competing neurons in adaptation to the input patterns with the aid of a novel type of neuron called control neuron. The output of the control neurons provides feedback reinforcement signals to modify the weight vectors during training. The training algorithm is different from conventional competitive learning methods in the sense that all the weight vectors are modified at each step of training. Computer simulation results are presented which demonstrate the behavior of the differentiator in estimating the class centroids. The results indicate the high power of dynamic competitive learning as well as the fast convergence rates of the weight vectors.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134355715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Speaker-independent syllable recognition by a pyramidical neural net","authors":"Shulin Yang, Youan Ke, Zhong Wang","doi":"10.1109/IJCNN.1991.170712","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170712","url":null,"abstract":"The application of the pyramidical multilayered neural net to speaker-independent recognition of isolated Chinese syllables was investigated. The feature extraction algorithm is described. Experiments involving 90 speakers from 25 provinces of China show that accuracies of 82.7% and 87.1% can be achieved, respectively, for ten isolated digits and seven typical syllables, and an over 75% cross-sex recognition rate can be obtained. The results indicate that this neural net technique can be applied to speaker-independent syllable recognition and that its performance is comparable to that of the hidden Markov model method.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130336016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An enhancement to MLP model to enforce closed decision regions","authors":"R. Gemello, F. Mana","doi":"10.1109/IJCNN.1991.170486","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170486","url":null,"abstract":"Describes a modification of the basic MLP (multilayer perceptron) model implemented to improve its capability to enforce closed decision regions. The authors' proposal is to use hyperspheres instead of hyperplanes on the first hidden layer, and in turn combine them through the next layers. After training, the decision regions will be naturally closed because they are built on simple computational elements which will fire only if the pattern will fall in the hypersphere receptive fields. The training is achieved by applying a modification of the basic backpropagation error without use of ad-hoc algorithms. A two-dimensional example is reported.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"8 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130337609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Discovering production rules with higher order neural networks: a case study. II","authors":"A. Kowalczyk, H. Ferrá, K. Gardiner","doi":"10.1109/IJCNN.1991.170457","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170457","url":null,"abstract":"It is demonstrated by example that neural networks can be used successfully for automatic extraction of production rules from empirical data. The case considered is a popular public domain database of 8124 mushrooms. With the use of a term selection algorithm, a number of very accurate mask perceptrons (a kind of high-order network or polynomial classifier) have been developed. Then rounding of synaptic weights was applied, leading in many cases to networks with integer weights which were subsequently converted to production rules. It is also shown that focusing of network attention onto a smaller subset of useful attributes ordered with respect to their decreasing discriminating abilities helps significantly in accurate rule generation.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115194276","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature selection for neural network recognition","authors":"T. Adachi, R. Furuya, Stephan Greene, K. Mikuriya","doi":"10.1109/IJCNN.1991.170481","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170481","url":null,"abstract":"Presents a system designed to help in the development of image recognition applications, using a general neural-network classifier and an algorithm for selecting effective image features given a small number of samples. Input to the system consists of a number of primitive image features computed directly from pixel values. The feature selection subsystem generates an image recognition feature vector by operations on the primitive features. It uses a combination of rule-based techniques and statistical heuristics to select the best features. The authors propose a quality statistic function which is based on sample values for each primitive feature. The parameters of this function were decided, and the authors experimented on several different target image groups using this function. Recognition rates were perfect in each case.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115636690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image transformation by spatial inhibition and local association","authors":"T. Omori","doi":"10.1109/IJCNN.1991.170472","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170472","url":null,"abstract":"The author proposes a model of image transformation that can modulate any unlearned object with a general transformation. That is, the transformation is independent of an object's shape. The local associative neural network model can transform a figure represented by a local feature set. The model transforms a figure satisfying constraints that are given as external inhibition and completing conditions that any figure should satisfy to be a reasonable shape. The basic methods are a figure representation with local features, feature transformation with spatial inhibition, and figure restoration with their interactions. With this model, one can realize an elemental function that will lead to a general figure transformation model without learning or experience.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115680237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FastProp: a selective training algorithm for fast error propagation","authors":"F. Wong","doi":"10.1109/IJCNN.1991.170635","DOIUrl":"https://doi.org/10.1109/IJCNN.1991.170635","url":null,"abstract":"An improved backpropagation algorithm, called FastProp, for training a feedforward neural network is described. The unique feature of the algorithm is the selective training which is based on the instantaneous causal relationship between the input and output signals during the training process. The causal relationship is calculated based on the error backpropagated to the input layers. The accumulated error, referred to as the accumulated error indices (AEIs), are used to rank the input signals according to their correlation relation with the output signals. An entire set of time series data can be clustered into several situations based on the current input signal which has the highest AEI index, and the neurons can be activated based on the current situations. Experimental results showed that a significant reduction in training time can be achieved with the selective training algorithm compared to the traditional backpropagation algorithm.<<ETX>>","PeriodicalId":211135,"journal":{"name":"[Proceedings] 1991 IEEE International Joint Conference on Neural Networks","volume":"79 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114490085","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}