Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing: Latest Publications

Removal of residual crosstalk components in blind source separation using LMS filters
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030055
R. Mukai, S. Araki, H. Sawada, S. Makino
Abstract: The performance of blind source separation (BSS) using independent component analysis (ICA) declines significantly in a reverberant environment. The degradation is mainly caused by residual crosstalk components derived from the reverberation of the jammer signal. This paper describes a post-processing method designed to refine the output signals obtained by BSS. We propose a new method that uses LMS filters in the frequency domain to estimate the residual crosstalk components in the separated signals. The estimated components are removed by non-stationary spectral subtraction. The proposed method removes the residual components precisely and thus compensates for the weakness of BSS in a reverberant environment. Experimental results using speech signals show that the proposed method improves the signal-to-interference ratio by 3 to 5 dB.
Citations: 25
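
A minimal sketch of the idea in the abstract above (not the authors' code): per-frequency-bin NLMS filters estimate the residual crosstalk in one BSS output from the other output, and the estimate is removed by spectral subtraction. All array names, filter lengths, and step sizes are illustrative assumptions.

```python
import numpy as np

def remove_residual_crosstalk(Y_target, Y_jammer, taps=5, mu=0.1, floor=0.1):
    """Y_target, Y_jammer: STFTs of the two BSS outputs, shape (n_bins, n_frames)."""
    n_bins, n_frames = Y_target.shape
    Y_clean = np.copy(Y_target)
    for k in range(n_bins):                      # independent filter per frequency bin
        w = np.zeros(taps, dtype=complex)        # NLMS filter for this bin
        for t in range(taps, n_frames):
            x = Y_jammer[k, t - taps:t][::-1]    # recent jammer frames (tap-delay line)
            crosstalk_hat = np.vdot(w, x)        # estimated residual crosstalk
            e = Y_target[k, t] - crosstalk_hat   # prediction error drives adaptation
            w += mu * np.conj(e) * x / (np.vdot(x, x).real + 1e-8)
            # spectral subtraction: shrink the magnitude, keep the original phase
            mag = max(abs(Y_target[k, t]) - abs(crosstalk_hat),
                      floor * abs(Y_target[k, t]))
            Y_clean[k, t] = mag * np.exp(1j * np.angle(Y_target[k, t]))
    return Y_clean
```
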
Linear input network for neural network automata model adaptation
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030073
F. Mana, R. Gemello
Abstract: The paper describes an experimental investigation of the applicability of linear input networks (LIN) as a channel and noise adaptation technique for an application of the Loquendo neural-network-based speech recognizer in a car environment. The application considered is an automated call center that provides traffic information through a voice dialogue system. The connection to the call center is made with a commercial in-car device consisting of a microphone, placed in front of the driver, equipped with an echo canceller and built-in noise reduction; the connection is set up over a GSM link. In our experiments, the LIN technique adapts the base neural-network speech recognizer to this new environment. Some variants devoted to reducing the number of estimated parameters are also introduced. The LIN technique is also compared with classical denoising techniques based on noise spectral subtraction. The results confirm the validity of LIN for channel and noise adaptation, while the introduced variants are a valid alternative when a reduced model size is important. The best performance in our specific application, a 57.14% error reduction with respect to general acoustic models, was achieved by the joint use of a LIN and noise spectral subtraction.
Citations: 0
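
A minimal sketch of a linear input network as described above: a trainable linear transform is placed in front of a frozen, pre-trained acoustic model and estimated on adaptation data from the new channel/noise condition. The wrapper class and feature dimension are illustrative assumptions, not the Loquendo recognizer.

```python
import torch
import torch.nn as nn

class LINAdaptedModel(nn.Module):
    def __init__(self, acoustic_model: nn.Module, feat_dim: int):
        super().__init__()
        self.lin = nn.Linear(feat_dim, feat_dim)        # the adaptation layer
        nn.init.eye_(self.lin.weight)                   # start as an identity mapping
        nn.init.zeros_(self.lin.bias)
        self.acoustic_model = acoustic_model
        for p in self.acoustic_model.parameters():      # keep the original network fixed
            p.requires_grad = False

    def forward(self, feats):                           # feats: (batch, feat_dim)
        return self.acoustic_model(self.lin(feats))

# Adaptation updates only the LIN parameters on in-domain data, e.g.:
# model = LINAdaptedModel(pretrained_net, feat_dim=39)
# opt = torch.optim.SGD(model.lin.parameters(), lr=1e-3)
```
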
Decision templates for the classification of bioacoustic time series
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030027
C. Dietrich, G. Palm, F. Schwenker
Abstract: The classification of time series is the topic of this paper. In particular, we discuss the combination of multiple classifier outputs with decision templates. The decision templates are calculated over a set of feature vectors extracted in local time windows. To learn characteristic classifier outputs of time series, a set of decision templates is determined for the individual classes. We present algorithms to calculate multiple decision templates and demonstrate the behaviour of this new approach on a real-world data set from the field of bioacoustics.
Citations: 53
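
A minimal sketch of classifier combination with decision templates in its standard form: a decision profile stacks the soft outputs of several classifiers, one template per class is the mean profile over that class's training data, and a test profile is assigned to the closest template. The shapes and the squared-Euclidean similarity measure are illustrative assumptions.

```python
import numpy as np

def fit_decision_templates(profiles, labels, n_classes):
    """profiles: (n_samples, n_classifiers, n_classes) soft outputs; labels: (n_samples,)."""
    return np.stack([profiles[labels == c].mean(axis=0) for c in range(n_classes)])

def predict_with_templates(templates, profile):
    """templates: (n_classes, n_classifiers, n_classes); profile: (n_classifiers, n_classes)."""
    dists = ((templates - profile) ** 2).sum(axis=(1, 2))   # distance to each class template
    return int(np.argmin(dists))                            # closest template wins
```
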
Accurate estimation of the signal baseline in DNA chromatograms
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030015
L. Andrade, E. Manolakos
Abstract: Accurately estimating the varying baseline level in different parts of a DNA chromatogram is a challenging and important problem for accurate base-calling. We formulate the problem in a statistical learning framework and propose an Expectation-Maximization algorithm for its solution. We also present a faster, iterative histogram-based method for estimating the background of the signal in small windows. The two methods can be combined with regression techniques to correct the baseline in all regions of the chromatogram and are shown to work well even in areas of low SNR. By improving the separation of clusters, baseline correction reduces the classification errors of the BEM base-caller developed in our group.
Citations: 5
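
An illustrative sketch of a windowed, iterative background estimate in the spirit of the fast histogram-based method mentioned above (not the paper's exact algorithm): within each window, samples far above the current background estimate are treated as signal peaks and excluded, and the estimate is recomputed until it stabilizes. The window size and clipping threshold are assumptions.

```python
import numpy as np

def window_baseline(trace, win=200, k=2.0, iters=10):
    """trace: 1-D chromatogram channel; returns a piecewise-constant baseline estimate."""
    baseline = np.empty_like(trace, dtype=float)
    for start in range(0, len(trace), win):
        seg = trace[start:start + win].astype(float)
        keep = np.ones(len(seg), dtype=bool)
        for _ in range(iters):                       # iteratively clip peak samples
            mu, sigma = seg[keep].mean(), seg[keep].std()
            new_keep = seg <= mu + k * sigma
            if np.array_equal(new_keep, keep):
                break
            keep = new_keep
        baseline[start:start + win] = seg[keep].mean()
    return baseline
```
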
Minimax strategies for training classifiers under unknown priors
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030036
R. Alaíz-Rodríguez, Jesús Cid-Sueiro
Abstract: Most supervised learning algorithms are based on the assumption that the training data set reflects the underlying statistical model of the real data. However, this stationarity assumption is not always satisfied in practice: quite frequently, class prior probabilities do not match the class proportions in the training data set. The minimax approach selects the classifier that minimizes the error probability under worst-case conditions. We propose a two-step learning algorithm to train a neural network to estimate the minimax classifier, which is robust to changes in the class priors. During the first step, posterior probabilities based on the training-data priors are estimated. During the second step, the class priors are modified in order to minimize a cost function that is asymptotically equivalent to the worst-case error rate. The procedure is illustrated on a softmax-based neural network. Several experimental results show the advantages of the proposed method with respect to other approaches.
Citations: 0
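
A minimal sketch of the prior-correction step that underlies approaches like the one above: posteriors estimated under the training-set priors can be re-weighted to any other set of class priors via Bayes' rule. The minimax priors would then be chosen to maximize the resulting error; that search is omitted here, and the function names are assumptions.

```python
import numpy as np

def reweight_posteriors(posteriors, train_priors, new_priors):
    """posteriors: (n_samples, n_classes) estimated under train_priors."""
    w = np.asarray(new_priors) / np.asarray(train_priors)
    adjusted = posteriors * w                              # Bayes correction, up to normalization
    return adjusted / adjusted.sum(axis=1, keepdims=True)  # renormalize per sample
```
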
Biometric watermark authentication with multiple verification rule
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030071
T. Satonaka
Abstract: The paper describes a biometric watermarking procedure for accurate facial-signature authentication. Multiple decision rules over the watermark and facial signatures are formulated to map patterns in the overlapping region between classes to a separable, disjoint one. The decision rule on the watermark signature, which is uniquely assigned to a face image, reduces the uncertainty caused by the missing patterns of unknown intruder classes. The algorithm, incorporating multiple signatures, reduces the recognition error rate from 2.29% to 0.07%. The watermarking approach attains robustness to various attacks and transformations by focusing on salient facial features.
Citations: 27
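
An illustrative sketch of a multiple-verification rule in the spirit of the abstract above (not the paper's algorithm): a claim is accepted only when both the facial similarity score and the match on the embedded watermark signature agree. The score functions, thresholds, and signature format are assumptions.

```python
import numpy as np

def verify(face_score: float, extracted_watermark: np.ndarray,
           enrolled_watermark: np.ndarray,
           face_threshold: float = 0.8, max_bit_errors: int = 3) -> bool:
    """Combine two decisions: facial-signature similarity and watermark-bit agreement."""
    bit_errors = int(np.count_nonzero(extracted_watermark != enrolled_watermark))
    return face_score >= face_threshold and bit_errors <= max_bit_errors
```
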
Local minima effects on the transient performance of non-linear blind equalizers
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030088
J. Destro-Filho
Abstract: The computational requirements and the transient performance of several non-linear blind equalizers are compared in the case of transmission over linear and non-linear channels. The multilayer perceptron (MLP), the radial-basis-function network (RBF), the polynomial perceptron (PP) and two recently proposed non-linear structures (see Destro Filho, J.B., et al., Proc. GLOBECOM'96, p.196-200, 1996; Proc. GLOBECOM'99, 1999) are simulated. These equalizers are also compared to two classical benchmarks: the Volterra filter and the Godard algorithm. A criterion for assessing the impact of parameter initialization (filter coefficients and synaptic weights) on the transient performance is proposed and evaluated. The results establish guidelines for choosing a particular non-linear blind equalizer when a trade-off between robustness to local-minima problems and computational requirements must be struck.
Citations: 0
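
A minimal sketch of the Godard / constant-modulus algorithm (CMA) used as a classical benchmark in the comparison above. The equalizer length, step size, and unit-modulus constellation value are illustrative assumptions.

```python
import numpy as np

def cma_equalize(received, n_taps=11, mu=1e-3, R2=1.0):
    """received: complex baseband samples; returns blindly equalized output."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                          # center-spike initialization
    out = np.zeros(len(received), dtype=complex)
    for n in range(n_taps, len(received)):
        x = received[n - n_taps:n][::-1]          # regressor, most recent sample first
        y = np.vdot(w, x)                         # equalizer output
        e = y * (np.abs(y) ** 2 - R2)             # Godard (p = 2) error term
        w -= mu * np.conj(e) * x                  # stochastic-gradient update
        out[n] = y
    return out
```
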
Dynamic Bayesian network based speech recognition with pitch and energy as auxiliary variables
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030075
T. A. Stephenson, J. Escofet, M. Magimai.-Doss, H. Bourlard
Abstract: Pitch and energy are two fundamental features describing speech and are important in human speech recognition. However, when incorporated as features in automatic speech recognition (ASR), they usually cause a significant degradation in recognition performance due to the noise inherent in estimating or modeling them. We show experimentally how this can be corrected by either conditioning the emission distributions on these features or by marginalizing them out during recognition. Since doing this is not straightforward with standard hidden Markov models (HMMs), the work is carried out in the framework of dynamic Bayesian networks (DBNs), which offer more flexibility in defining the topology of the emission distributions and in specifying which variables should be marginalized out.
Citations: 22
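
An illustrative sketch (not the paper's DBN implementation) of the two options the abstract describes for a discretized auxiliary variable a (for example, quantized pitch): condition the emission likelihood on a, or marginalize a out. The per-(state, a) Gaussian parameters are assumptions made for illustration.

```python
import numpy as np
from scipy.stats import multivariate_normal

def emission_conditioned(x, state, a, means, covs):
    """p(x | state, a): one Gaussian per (state, auxiliary-value) pair."""
    return multivariate_normal.pdf(x, mean=means[state][a], cov=covs[state][a])

def emission_marginalized(x, state, means, covs, p_a_given_state):
    """p(x | state) = sum_a p(x | state, a) * p(a | state)."""
    return sum(p_a_given_state[state][a] *
               emission_conditioned(x, state, a, means, covs)
               for a in range(len(p_a_given_state[state])))
```
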
A stochastic method for minimizing functions with many minima
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030040
H. Ye, Zhiping Lin
Abstract: An efficient stochastic method for continuous optimization problems is presented. Combining a novel global search with typical local optimization methods, the proposed method specializes in hard optimization problems such as minimizing multimodal or ill-conditioned unimodal objective functions. Extensive numerical studies show that, starting from a random initial point, the proposed method is always able to find the globally optimal solution. Computational results in comparison with other global optimization algorithms clearly illustrate the efficiency and accuracy of the method. Since traditional supervised neural-network training can be formulated as a continuous optimization problem, the presented method can also be applied to neural-network learning.
Citations: 5
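
An illustrative sketch of the general scheme above, a stochastic global search wrapped around a local optimizer. The paper's specific global-search step is not reproduced here; a simple random-perturbation restart stands in for it, and the step size and restart count are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def stochastic_minimize(f, x0, n_restarts=50, step=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    best = minimize(f, x0)                          # local refinement of the start point
    for _ in range(n_restarts):
        candidate = best.x + step * rng.standard_normal(len(best.x))
        res = minimize(f, candidate)                # local search from a perturbed point
        if res.fun < best.fun:                      # keep the better basin
            best = res
    return best

# Example on a multimodal (Rastrigin-like) function:
# stochastic_minimize(lambda x: np.sum(x**2) + 10*np.sum(1 - np.cos(2*np.pi*x)),
#                     x0=np.array([3.0, -2.0]))
```
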
Language model adaptation in speech recognition using document maps
Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing. Pub Date: 2002-11-07. DOI: 10.1109/NNSP.2002.1030074
K. Lagus, M. Kurimo
Abstract: We present speech experiments carried out to evaluate a topically focused language model in large-vocabulary speech recognition. An ordered topical clustering is first computed as a self-organized mapping of a large document collection. Language models are then trained for each text cluster or for several neighboring clusters. The resulting organized collection of language models is used efficiently in continuous speech recognition by concentrating on the model that corresponds most closely to the current topic of discussion. The speech recognition experiments are carried out on a novel Finnish speech database. A property of Finnish that is particularly challenging for speech recognition is its extremely fast vocabulary growth, which makes many standard word-based language modeling methods impractical for large-vocabulary tasks.
Citations: 6
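
A minimal sketch of topic-dependent language-model selection in the spirit of the abstract above. The self-organized document map is replaced here by generic cluster assignments, and the per-cluster models are simple add-alpha-smoothed unigrams; the real system uses far richer models, so every detail below is an illustrative assumption.

```python
import math
from collections import Counter

def train_cluster_unigrams(docs, cluster_ids, vocab, alpha=0.1):
    """docs: list of token lists; returns {cluster: {word: prob}} with add-alpha smoothing."""
    counts = {}
    for doc, c in zip(docs, cluster_ids):
        counts.setdefault(c, Counter()).update(doc)
    models = {}
    for c, cnt in counts.items():
        total = sum(cnt.values()) + alpha * len(vocab)
        models[c] = {w: (cnt[w] + alpha) / total for w in vocab}
    return models

def pick_topic_model(models, recent_words):
    """Choose the cluster LM that gives the recent recognition history the highest likelihood."""
    def loglik(lm):
        return sum(math.log(lm.get(w, 1e-12)) for w in recent_words)
    return max(models, key=lambda c: loglik(models[c]))
```
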