Latest publications in Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop

Connectionist acoustic word models
Chuck Wooters, N. Morgan
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253697

Abstract: Other researchers have claimed significant improvements to their recognizers by using word models based on data-driven subphonetic units rather than traditional subword models. A possible advantage of this approach is that subphonetic models can be derived automatically from the data, so that the recognizer is trained to discriminate between acoustic categories. The authors describe some of the problems with units derived from acoustic-phonetic considerations (when used in a hidden-Markov-model-based recognizer), and propose a novel technique for constructing acoustic word models using a multilayer perceptron (MLP). The authors are designing a subphonetic unit, called the UNnone, which is similar to fenones. A vector quantizer is used to partition the acoustic space into a set of clusters. Once the vector quantizer has been designed, the training vectors are compared to the reference vectors using a Euclidean distance measure, and the label of the closest reference vector is assigned to the input vector. These labels are used as targets for training the MLP.

Citations: 0
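The labeling procedure in the abstract above is concrete enough to sketch. Below is a minimal, illustrative version (the toy data and names are ours, not the paper's): each training frame is compared to the codebook's reference vectors under Euclidean distance and receives the index of the nearest one; those indices then serve as classification targets for the MLP.

```python
import numpy as np

def vq_labels(frames, codebook):
    """Assign each acoustic frame the index of its nearest codebook
    (reference) vector under Euclidean distance; these indices serve
    as targets when training the MLP."""
    # Pairwise squared distances, shape (n_frames, n_codes).
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

# Toy example: 5 random 12-dim "cepstral" frames, 8-entry codebook.
rng = np.random.default_rng(0)
frames = rng.normal(size=(5, 12))
codebook = rng.normal(size=(8, 12))
print(vq_labels(frames, codebook))  # indices in [0, 8)
```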
A recurrent neural network for nonlinear time series prediction: a comparative study
S.S. Rao, S. Sethuraman, V. Ramamurti
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253659

Abstract: The performance of recurrent neural networks (RNNs) is compared with that of conventional nonlinear prediction schemes, such as a Kalman predictor (KP) based on a state-dependent model and a second-order Volterra filter. Simulation results on some typical nonlinear time series indicate that the neural network can predict with accuracy on a par with the KP. It is noted that a higher-order extended Kalman filter or a Volterra model might perform better than the ones considered. The network requires very few sweeps through the training data, though its training is computationally much more intensive than that of the conventional schemes. The authors discuss the advantages and drawbacks of each of the predictors considered.

Citations: 9
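Of the conventional baselines named in the abstract, the second-order Volterra filter is compact enough to sketch. The memory length, toy series, and least-squares fitting below are illustrative assumptions; the predictor's output is a linear combination of past samples and all their pairwise products.

```python
import numpy as np

def volterra2_design(x, p):
    """Regressor matrix for a second-order Volterra predictor: a bias,
    the linear terms x[n-1..n-p], and all products x[n-i]*x[n-j]."""
    rows = []
    for n in range(p, len(x)):
        past = x[n - p:n][::-1]                       # x[n-1], ..., x[n-p]
        quad = np.outer(past, past)[np.triu_indices(p)]
        rows.append(np.concatenate(([1.0], past, quad)))
    return np.array(rows), x[p:]

# Toy nonlinear series: x[n] = 0.8*x[n-1] - 0.5*x[n-1]**2 + noise.
rng = np.random.default_rng(1)
x = np.zeros(500)
for n in range(1, 500):
    x[n] = 0.8 * x[n - 1] - 0.5 * x[n - 1] ** 2 + 0.05 * rng.normal()

H, y = volterra2_design(x, p=3)
w, *_ = np.linalg.lstsq(H, y, rcond=None)             # least-squares fit
print(f"one-step prediction MSE: {np.mean((H @ w - y) ** 2):.3e}")
```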
A fast simulator for neural networks on DSPs or FPGAs
M. Adé, Rudy Lauwereins, J. Peperstraete
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253652

Abstract: The authors describe their achievements and current research on the implementation of a fast digital simulator for artificial neural networks. The simulator is mapped either onto a parallel digital signal processor (DSP) or onto a set of field-programmable gate arrays (FPGAs). Powerful tools have been developed that automatically compile a graphical neural network description into executable code for the DSPs, with the flexibility to adjust weights and thresholds at run time. The next step is to realize similar tools for the FPGAs.

Citations: 2
Some new results in nonlinear predictive image coding using neural networks
H. Li
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253671

Abstract: The problem of nonlinear predictive image coding with multilayer perceptrons is considered. Some important aspects of coding, including the training of multilayer perceptrons, the adaptive scheme, and robustness to channel noise, are discussed in detail. Computer simulation results show that nonlinear predictors have better predictive performance than linear DPCM. It is shown that the nonlinear predictor produces a smaller predictive-error variance than the linear predictor; that, in the absence of channel noise, the nonlinear predictor provides about a 3-dB improvement in signal-to-noise ratio over the linear one at the same transmission bit rate; and that, after being specially trained, the nonlinear predictor is more robust to channel noise than the linear one.

Citations: 7
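The scheme being improved on, DPCM with a causal predictor, can be sketched in a few lines; the paper's contribution is to replace the linear predictor below with a trained MLP. The quantizer step, neighborhood, and toy image are illustrative assumptions of ours.

```python
import numpy as np

def dpcm_encode(img, predict, q_step=8):
    """DPCM over a raster scan: predict each pixel from already-decoded
    causal neighbours (west, north, north-west), quantize the residual,
    and track the decoder's reconstruction to avoid drift."""
    h, w = img.shape
    rec = np.zeros_like(img, dtype=float)       # decoder-side reconstruction
    codes = np.zeros_like(img, dtype=int)
    for i in range(h):
        for j in range(w):
            ctx = (rec[i, j - 1] if j else 128.0,
                   rec[i - 1, j] if i else 128.0,
                   rec[i - 1, j - 1] if i and j else 128.0)
            p = predict(ctx)
            codes[i, j] = round((img[i, j] - p) / q_step)
            rec[i, j] = p + codes[i, j] * q_step
    return codes, rec

def linear(ctx):
    """Classical linear DPCM predictor; the paper swaps in an MLP here."""
    return 0.5 * ctx[0] + 0.25 * ctx[1] + 0.25 * ctx[2]

img = np.indices((16, 16)).sum(0) * 8.0          # smooth toy "image"
codes, rec = dpcm_encode(img, linear)
print("residual variance:", codes.var() * 8 ** 2)
```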
Capacity control in classifiers for pattern recognition
S. Solla
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253687

Abstract: Achieving good performance in statistical pattern recognition requires matching the capacity of the classifier to the size of the available training set. A classifier with too many adjustable parameters (large capacity) is likely to learn the training set without difficulty but be unable to generalize properly to new patterns. If the capacity is too small, even the training set might not be learned without appreciable error. There is thus an intermediate, optimal classifier capacity that guarantees the best expected generalization for the given training set size. The method of structural risk minimization provides a theoretical tool for tuning the capacity of the classifier to this optimal match. Capacity can be controlled through a variety of methods involving not only the structure of the classifier itself, but also properties of the input space that can be modified through preprocessing, as well as modifications of the learning algorithm that regularize the search for solutions to the problem of learning the training set. Experiments performed on a benchmark problem of handwritten digit recognition are discussed.

Citations: 10
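The capacity-matching principle is easy to demonstrate on a toy problem, with polynomial degree standing in for the number of adjustable parameters (this sketch is ours, not the paper's digit-recognition experiment): training error falls monotonically with capacity, while validation error typically passes through a minimum at an intermediate capacity.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + 0.2 * rng.normal(size=60)     # noisy target
xt, yt, xv, yv = x[:40], y[:40], x[40:], y[40:]    # train / validation split

for degree in (1, 3, 5, 9, 15):                    # increasing capacity
    coef = np.polyfit(xt, yt, degree)
    tr = np.mean((np.polyval(coef, xt) - yt) ** 2)
    va = np.mean((np.polyval(coef, xv) - yv) ** 2)
    print(f"degree {degree:2d}: train MSE {tr:.3f}  val MSE {va:.3f}")
# Training error keeps shrinking with degree; validation error is
# typically lowest at an intermediate degree -- the capacity matched
# to the amount of training data.
```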
Training continuous density hidden Markov models in association with self-organizing maps and LVQ
M. Kurimo, K. Torkkola
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253695

Abstract: The authors propose a novel initialization method for continuous observation density hidden Markov models (CDHMMs) based on self-organizing maps (SOMs) and learning vector quantization (LVQ). The framework transcribes speech into phoneme sequences using CDHMMs as phoneme models. When numerous mixtures of, for example, Gaussian density functions are used to model the observation distributions of CDHMMs, good initial values are necessary for the Baum-Welch estimation to converge satisfactorily. The authors have experimented with rapidly constructing good initial values with SOMs, and with enhancing the discriminative power of the phoneme models by adaptively training the state output distributions using the LVQ algorithm. Experiments indicate that the proposed method improves on the pure Baum-Welch and segmental K-means procedures.

Citations: 10
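One way to picture the initialization idea, as a minimal sketch under our own assumptions about map size and training schedule rather than the authors' exact procedure: train a small one-dimensional SOM on the feature vectors assigned to a state, then use its codebook vectors as the initial means of that state's Gaussian mixture before running Baum-Welch.

```python
import numpy as np

def som_init_means(X, n_mix, epochs=20, lr0=0.5, sigma0=1.5):
    """Train a 1-D self-organizing map on feature vectors X and return
    its codebook: ordered prototype vectors usable as initial Gaussian
    mixture means for one CDHMM state."""
    rng = np.random.default_rng(3)
    W = X[rng.choice(len(X), n_mix, replace=False)].astype(float)
    grid = np.arange(n_mix)                       # 1-D map topology
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = max(sigma0 * (1 - t / epochs), 0.5)
        for x in X[rng.permutation(len(X))]:
            bmu = np.argmin(((W - x) ** 2).sum(axis=1))
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            W += lr * h[:, None] * (x - W)        # pull neighbours toward x
    return W

X = np.random.default_rng(4).normal(size=(200, 13))  # toy cepstral vectors
print(som_init_means(X, n_mix=5).shape)              # (5, 13)
```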
Pattern classification with a codebook-excited neural network
L. Wu, F. Fallside
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253690

Abstract: A codebook-excited neural network (CENN) is formed by a multilayer perceptron excited by a set of code vectors. The authors study its discriminant performance and compare it with other models; the improvement offered by the CENN is demonstrated in a number of cases. The CENN has been developed for classification. The multilayer codebook-excited feedforward neural network enhances the separability of patterns through its nonlinear mapping and achieves better discriminant performance than the single-layer version. The codebook-excited recurrent neural network exploits the dependence among observations and forms a contextual compound classifier, which improves on ordinary classifiers.

Citations: 0
A two-layer Kohonen neural network using a cochlear model as a front-end processor for a speech recognition system
S. Lennon, E. Ambikairajah
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253699

Abstract: The authors describe a two-layer neural network speech recognition system based on Kohonen's algorithm, with a cochlear model as the front-end processor. The basilar membrane is represented by a cascade of 128 digital filters, of which 90 fall within the speech bandwidth of 250 Hz to 4 kHz. The outputs of these 90 filters are presented as the input vector to the first layer of the Kohonen net every 16 ms. The input to the second layer is a concatenated vector created from the trajectory of successively excited neurons firing on the first layer. Sammon's nonlinear mapping algorithm was used as an analysis tool for measuring the effectiveness of different parts of the recognition process. The system was first simulated and later implemented on Inmos transputers.

Citations: 1
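A crude stand-in for such a front end can be sketched with an ordinary band-pass filterbank. The channel count and frame interval follow the abstract (90 channels, 16 ms); the sampling rate, filter order, and log spacing are our assumptions, and the top band edge is pulled just under the assumed Nyquist frequency.

```python
import numpy as np
from scipy.signal import butter, lfilter

def filterbank_frames(x, fs=8000, n_ch=90, frame_ms=16):
    """Crude stand-in for the cochlear front end: n_ch band-pass filters
    log-spaced over roughly 250 Hz - 4 kHz; per-channel mean energy every
    frame_ms milliseconds gives one input vector for the Kohonen map."""
    edges = np.geomspace(250, 3900, n_ch + 1)     # stay below Nyquist
    frame = int(fs * frame_ms / 1000)
    n_frames = len(x) // frame
    feats = np.empty((n_frames, n_ch))
    for c in range(n_ch):
        b, a = butter(2, [edges[c], edges[c + 1]], btype="bandpass", fs=fs)
        y = lfilter(b, a, x) ** 2                 # instantaneous energy
        feats[:, c] = y[:n_frames * frame].reshape(n_frames, frame).mean(1)
    return feats

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)                   # 1 s toy tone
print(filterbank_frames(x, fs).shape)              # (62, 90)
```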
Unsupervised sequence classification
J. Kindermann, C. Windheuser
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253694

Abstract: The authors introduce a novel approach to unsupervised sequence classification, the competitive sequence learning (CSL) system. The CSL system consists of several extended Kohonen feature maps ordered in a hierarchy. During training, the maps develop representations of subsequences, with increasing abstraction at the higher maps. The authors apply the approach to real speech data and report preliminary results on a word recognition task; a generalization rate of 70% is achieved. The CSL system performs learning by listening: it divides the continuous sequence of input patterns into statistically relevant subsequences. This representation can be used to find appropriate subword models by means of a self-organizing neural network.

Citations: 2
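The hierarchical idea, with a first map quantizing frames and trajectories of winning units feeding the next map up, can be sketched as follows. Window length, map sizes, and data are illustrative, and the actual CSL system uses extended Kohonen maps rather than this plain version.

```python
import numpy as np

def bmu_sequence(X, W):
    """Winning-unit index on a (flattened) Kohonen map for each frame."""
    return np.array([np.argmin(((W - x) ** 2).sum(axis=1)) for x in X])

def trajectory_vectors(bmus, grid_shape, window=5):
    """Concatenate the 2-D map coordinates of `window` successive winners;
    these vectors feed the next map in the hierarchy, so it learns a
    representation of subsequences rather than of single frames."""
    coords = np.column_stack(np.unravel_index(bmus, grid_shape))
    return np.array([coords[i:i + window].ravel()
                     for i in range(len(coords) - window + 1)])

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 12))           # toy frame sequence
W1 = rng.normal(size=(8 * 8, 12))        # first-layer 8x8 map codebook
traj = trajectory_vectors(bmu_sequence(X, W1), (8, 8))
print(traj.shape)                        # (96, 10): 5 winners x 2 coords
```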
Generalized feedforward filters with complex poles
T. Oliveira e Silva, P. Guedes de Oliveira, J. Príncipe, B. de Vries
Neural Networks for Signal Processing II: Proceedings of the 1992 IEEE Workshop. Pub date: 1992-08-31. DOI: 10.1109/NNSP.1992.253662

Abstract: The authors propose an extension to an existing structure, the gamma filter, replacing the real pole in the tap-to-tap transfer function with a pair of complex-conjugate poles and a zero. The new structure is, like the gamma filter, an IIR filter with restricted feedback whose stability is trivial to check. Whereas the gamma filter decouples the memory depth from the filter order for low-pass signals, the proposed structure decouples both the memory depth and the central frequency from the filter order for band-pass signals. The learning equations of the model parameters are presented and shown to add only O(p) complexity to the backpropagation algorithm, where p is the filter order. The error surface of the linear filter is investigated in a system identification context, and the presence of local minima is confirmed. In a nonlinear system identification task, the proposed model performed better than a time-delay neural network.

Citations: 9
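For reference, the real-pole gamma memory that the paper generalizes is compact enough to state: tap k obeys x_k(n) = (1 - mu) * x_k(n-1) + mu * x_{k-1}(n-1) with x_0(n) = u(n), and the output is y(n) = sum_k w_k * x_k(n). The sketch below implements this original structure (the order, mu, and test signal are arbitrary choices of ours); the paper's extension replaces each first-order tap-to-tap section with a two-pole, one-zero section.

```python
import numpy as np

def gamma_filter(u, w, mu):
    """Real-pole gamma filter of order p = len(w) - 1: each tap is a leaky
    copy of the previous one, so memory depth (roughly p/mu) is decoupled
    from the filter order. The paper's extension swaps the real pole for a
    complex-conjugate pair plus a zero to handle band-pass signals."""
    p = len(w) - 1
    taps = np.zeros(p + 1)
    y = np.empty(len(u))
    for n, un in enumerate(u):
        prev = taps.copy()                            # values at time n-1
        taps[0] = un                                  # x_0(n) = u(n)
        taps[1:] = (1 - mu) * prev[1:] + mu * prev[:-1]
        y[n] = w @ taps
    return y

u = np.random.default_rng(6).normal(size=32)
print(gamma_filter(u, w=np.array([0.5, 0.3, 0.2]), mu=0.4)[:5])
```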