{"title":"Artificial neural network for ECG arryhthmia monitoring","authors":"Y. Hu, W. Tompkins, Q. Xue","doi":"10.1109/NNSP.1992.253677","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253677","url":null,"abstract":"The application of a multilayer perceptron artificial neural network model (ANN) to detect the QRS complex in ECG (electrocardiography) signal processing is presented. The objective is to improve the heart beat detection rate in the presence of severe background noise. An adaptively tuned multilayer perceptron structure is used to model the nonlinear, time-varying background noise. The noise is removed by subtracting the predicted noise from the original signal. Preliminary experimental results indicate that the ANN based approach consistently outperforms the conventional bandpass filtering approach and the linear adaptive filtering approach. Such performance enhancement is most critical toward the development of a practical automated online ECG arrhythmia monitoring system.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124116728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised multi-level segmentation of multispectral images","authors":"R. A. Fernandes, M. Jernigan","doi":"10.1109/NNSP.1992.253676","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253676","url":null,"abstract":"The authors describe a scheme that performs multilevel segmentation of an image at many scales using a multiresolution texture representation. Each level uses anisotropic diffusion to segment a multispectral image at successively lower resolutions. Texture and statistical similarities between and within levels guides the diffusion process. The restriction of coarse-to-fine segmentation is removed, and one operates at all levels simultaneously. In this manner the labeling process can choose the scale or scales at which useful segments exist. The network outperforms a fuzzy clustering scheme in the segmentation of a high-resolution multispectral aerial image.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116840641","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Application of frequency-domain neural networks to the active control of harmonic vibrations in nonlinear structural systems","authors":"T. J. Sutton, S. J. Elliott","doi":"10.1109/NNSP.1992.253665","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253665","url":null,"abstract":"The authors show how a nonlinear adaptive controller of quasi-neural architecture can be used to control harmonic vibrations even when it has to act through a nonlinear actuator element. The controller comprises a fixed nonlinearity to generate harmonics of the sinusoidal reference signal and a linear adaptive combiner. The coefficients in the adaptive combiner are adjusted using a steepest descent algorithm in which harmonic generation in the nonlinear system under control is taken into account. A neural model for this frequency domain description of a nonlinear system is discussed, and it is shown that using information derived from this model in the steepest descent algorithm amounts to backpropagating the error signal through the plant model.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116996773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A mathematical model for speech processing","authors":"A. Esposito, S. Rampone, C. Stanzione, R. Tagliaferri","doi":"10.1109/NNSP.1992.253693","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253693","url":null,"abstract":"The authors develop a mathematical model of the mechanisms that the auditory apparatus uses for signal processing. They have studied the model of the peripheral auditory apparatus described by S. Seneff (1985, 1988). They complete it by adding new features such as a pitch detector and a neural synchrony detector module, by modifying some filter parameters, and by integrating it with the variations suggested by P. Cosi et al. (1990). Then, to validate the model, some experimental results are shown.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130719871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Inserting rules into recurrent neural networks","authors":"Colin Giles, C. Omlin","doi":"10.1109/NNSP.1992.253712","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253712","url":null,"abstract":"The authors present a method that incorporates a priori knowledge in the training of recurrent neural networks. This a priori knowledge can be interpreted as hints about the problem to be learned and these hints are encoded as rules which are then inserted into the neural network. The authors demonstrate the approach by training recurrent neural networks with inserted rules to learn to recognize regular languages from grammatical string examples. Because the recurrent networks have second-order connections, rule-insertion is a straightforward mapping of rules into weights and neurons. Simulations show that training recurrent networks with different amounts of partial knowledge to recognize simple grammers improves the training times by orders of magnitude, even when only a small fraction of all transitions are inserted as rules. In addition, there appears to be no loss in generalization performance.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"129 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126009460","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fuzzy partition models and their effect in continuous speech recognition","authors":"Y. Kato, M. Sugiyama","doi":"10.1109/NNSP.1992.253702","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253702","url":null,"abstract":"Fuzzy partition models (FPMs) with multiple input-output units were applied to continuous speech recognition, and the use of automatic incremental training was evaluated. After initial training using word data, phrase recognition rates of 72.7% and 66.9% were obtained for an FPM and a TDNN (time-delay neural network), respectively. After incremental training, the phrase recognition rates improved to 86.3% and 78.4%, respectively. The FPMs provided more accurate segmentation after incremental training. The experiments determined that better phoneme segmentation provides greater improvement in phrase recognition. Incremental training also significantly improves recognition performance. As FPMs can be trained rapidly, various applications using large-scale training data are also possible.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122388755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Maximum mutual information training of a neural predictive-based HMM speech recognition system","authors":"K. Hassanein, L. Deng, M. Elmasry","doi":"10.1109/NNSP.1992.253696","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253696","url":null,"abstract":"A corrective training scheme based on the maximum mutual information (MMI) criterion is developed for training a neural predictive-based HMM (hidden Markov model) speech recognition system. The performance of the system on speech recognition tasks when trained with this technique is compared to its performance when trained using the maximum likelihood (ML) criterion. Preliminary results obtained indicate the superiority of ML training over MMI training for predictive-based models. This result is in agreement with earlier findings in the literature regarding direct classification models.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"148 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116052892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Learning of sinusoidal frequencies by nonlinear constrained Hebbian algorithms","authors":"J. Karhunen, J. Joutsensalo","doi":"10.1109/NNSP.1992.253709","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253709","url":null,"abstract":"The authors study certain unsupervised nonlinear Hebbian learning algorithms in the context of sinusoidal frequency estimation. If the nonlinearity is chosen suitably, these algorithm often perform better than linear Hebbian PCA subspace estimation algorithms in colored and impulsive noise. One of the algorithms seems to be able to separate the sinusoids from a noisy mixture input signal. The authors also derive another algorithm from a constrained maximization problem, which should be generally useful in extracting nonlinear features.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126019152","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"NetMap-software tool for mapping neural networks onto parallel computers","authors":"K. Przytula, V. Prasanna, W.-M. Lin","doi":"10.1109/NNSP.1992.253653","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253653","url":null,"abstract":"The authors present a software tool for mapping neural network computations onto mesh-connected SIMD (single instruction stream multiple data stream) computers. The tool, called NetMap, contains complete mapping programs for built-in network models and a library of elemental routing routines. For built-in models the user is required to provide only minimum information about the network. For example, for a backpropagation network with fully connected layers, it is enough to provide the number of neurons in each layer of the network. If the network has been pruned or structured in some other way, so that the layers are no longer fully interconnected, a complete connectivity matrix needs to be provided. Thus, NetMap can be used for networks of arbitrary topology.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122436419","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive training of feedback neural networks for non-linear filtering","authors":"G. Dreyfus, O. Macchi, S. Marcos, O. Nerrand, L. Personnaz, P. Roussel-Ragot, D. Urbani, C. Vignat","doi":"10.1109/NNSP.1992.253657","DOIUrl":"https://doi.org/10.1109/NNSP.1992.253657","url":null,"abstract":"The authors propose a general framework which encompasses the training of neural networks and the adaptation of filters. It is shown that neural networks can be considered as general nonlinear filters which can be trained adaptively, i.e., which can undergo continual training. A unified view of gradient-based training algorithms for feedback networks is proposed, which gives rise to new algorithms. The use of some of these algorithms is illustrated by examples of nonlinear adaptive filtering and process identification.<<ETX>>","PeriodicalId":438250,"journal":{"name":"Neural Networks for Signal Processing II Proceedings of the 1992 IEEE Workshop","volume":"1 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1992-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125798873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}