{"title":"Neural networks for signal/image processing using the Princeton Engine multi-processor","authors":"N. Binenbaum, L. Dias, P. Hsieh, C.H. Ju, S. Markel, J. Pearson, H. Taylor","doi":"10.1109/NNSP.1991.239481","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239481","url":null,"abstract":"The authors describe a modular neural network system for the removal of impulse noise from the composite video signal of television receivers, and the use of the Princeton Engine multi-processor for real-time performance assessment. This system out-performs alternative methods, such as median filters and matched filters. The system uses only eight neurons, and can be economically implemented in VLSI.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115057702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A time-derivative neural net architecture-an alternative to the time-delay neural net architecture","authors":"K. Paliwal","doi":"10.1109/NNSP.1991.239505","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239505","url":null,"abstract":"Though the time-delay neural net architecture has been recently used in a number of speech recognition applications, it has the problem that it can not use longer temporal contexts because this increases the number of connection weights in the network. This is a serious bottleneck because the use of larger temporal contexts can improve the recognition performance. In this paper, a time-derivative neural net architecture is proposed. This architecture has the advantage that it can utilize information about longer temporal contexts without increasing the number of connection weights in the network. This architecture is studied here for speaker-independent isolated-word recognition and its performance is compared with that of the time-delay neural net architecture. It is shown that the time-derivative neural net architecture, in spite of using less number of connection weights, outperforms the time-delay neural net architecture for speech recognition.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115091613","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Supervised and unsupervised feature extraction from a cochlear model for speech recognition","authors":"N. Intrator, G. Tajchman","doi":"10.1109/NNSP.1991.239495","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239495","url":null,"abstract":"The authors explore the application of a novel classification method that combines supervised and unsupervised training, and compare its performance to various more classical methods. The authors first construct a detailed high dimensional representation of the speech signal using Lyon's cochlear model and then optimally reduce its dimensionality. The resulting low dimensional projection retains the information needed for robust speech recognition.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115765552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An outer product neural network for extracting principal components from a time series","authors":"L. E. Russo","doi":"10.1109/NNSP.1991.239525","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239525","url":null,"abstract":"An outer product neural network architecture has been developed based on subspace concepts. The network is trained by auto-encoding the input exemplars, and will represent the input signal by k-principal components, k being the number of neurons or processing elements in the network. The network is essentially a single linear layer. The weight matrix columns orthonormalize during training. The output signal converges to the projection of the input onto a k-principal component subspace, while the residual signal represents the novelty of the input. An application to extracting sinusoids from a noisy time series is given.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"112 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123698193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Improved structures based on neural networks for image compression","authors":"Sergio Carrato, G. Ramponi, A. Premoli, G. L. Sicuranza","doi":"10.1109/NNSP.1991.239492","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239492","url":null,"abstract":"The problem of efficient image compression through neural networks (NNs) is addressed. Some theoretical results on the application of 2-layer linear NNs to this problem are given. Two more elaborate structures, based on a set of NNs, are further presented; they are shown to be very efficient while remaining computationally rather simple.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125021003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural networks for sidescan sonar automatic target detection","authors":"M.J. LeBlanc, E. Manolakos","doi":"10.1109/NNSP.1991.239521","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239521","url":null,"abstract":"The goal of this research is to develop a multi-layer feedforward neural network architecture which can distinguish targets (in this case, mines) from background clutter in sidescan sonar images. The network is to be implemented on a hardware neurocomputer currently in development at CSDL, with the goal of eventual real-time performance in the field. A variety of neural network architectures are developed, simulated, and evaluated in an attempt to find the best approach for this particular application. It has been found that classical statistical feature extraction is outperformed by a much less computationally expensive approach that simultaneously compresses and filters the raw data by taking a simple mean.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121727167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient training procedures for adaptive kernel classifiers","authors":"S. Chakravarthy, Joydeep Ghosh, L. Deuser, S. Beck","doi":"10.1109/NNSP.1991.239539","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239539","url":null,"abstract":"The authors investigate two training schemes for adapting the locations and receptive field widths of the centroids in radial basis function classifiers. The adaptive kernel classifier is able to adjust the responses of the hidden units during training using an extension of the Delta rule, thus leading to improved performance and reduced network size. The rapid kernel classifier, on the other hand, uses the faster learned vector quantization algorithm to adapt the centroids. This network shows a remarkable reduction in training time with little compromise in accuracy. The performance of these two networks is evaluated using underwater acoustic transient signals.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123846880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Three-dimensional structured networks for matrix equation solving","authors":"Li-Xin Wang, J. Mendel","doi":"10.1109/NNSP.1991.239533","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239533","url":null,"abstract":"Structured networks are feedforward neural networks with linear neurons than use special training algorithms. Two three-dimensional (3-D) structured networks are developed for solving linear equations and the Lyapunov equation. The basic idea of the structured network approaches is: first, represent a given equation-solving problem by a 3-D structured network so that if the network matches a desired pattern array, the weights of the linear neurons give the solution to the problem; then, train the 3-D structured network to match the desired pattern array using some training algorithms; finally, obtain the solution to the specific problem from the converged weights of the network. The training algorithms for the two 3-D structured networks are proved to converge exponentially fast to the correct solutions.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129747353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tutorial: digital neurocomputing for signal/image processing","authors":"S. Kung","doi":"10.1109/NNSP.1991.239479","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239479","url":null,"abstract":"The requirements on both the computations and storage for neural networks are extremely demanding. Neural information processing would be practical only when efficient and high-speed computing hardware can be made available. The author reviews several approaches to architecture and implementation of neural networks for signal and image processing. The author discusses direct design of dedicated neural networks implemented by a variety of hardware technologies (e.g. CMOS, CCD), and introduces an indirect design approach based on matrix-based mapping methodology for systolic/wavefront array processor. The array processors mapping technique presented should be applicable to both programmable neurocomputer and dedicated digital or analog neural processing circuits. Several key general-purpose and system-oriented designs are surveyed. Key design examples of existing parallel processing neurocomputers are also discussed.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126976861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A simple word-recognition network with the ability to choose its own decision criteria","authors":"K.A. Fischer, H. Strube","doi":"10.1109/NNSP.1991.239496","DOIUrl":"https://doi.org/10.1109/NNSP.1991.239496","url":null,"abstract":"Various reliable algorithms for the word classification problem have been developed. All these models are necessarily based on the classification of certain 'features' that have to be extracted from the presented word. The general problem in speech recognition is: what kind of features are both word dependent as well as speaker independent? The majority of the existing systems requires a feature selection by the designer, so the system cannot choose the features that best fit the above mentioned criterion. Therefore, the authors tried to build a neural network that is able to rank all the features (here: the cells of the input layer) according to their functional relevance. This method reduces both the necessity to preselect the features as well as the numerical effort by a stepwise removal of the cells that proved to be unimportant.<<ETX>>","PeriodicalId":354832,"journal":{"name":"Neural Networks for Signal Processing Proceedings of the 1991 IEEE Workshop","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1991-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116508220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}