Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing: Latest Publications

Time domain blind source separation of non-stationary convolved signals by utilizing geometric beamforming
R. Aichner, S. Araki, S. Makino, T. Nishikawa, H. Saruwatari
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030056
Abstract: We propose a time-domain blind source separation (BSS) algorithm that utilizes geometric information such as sensor positions and assumed locations of sources. The algorithm tackles the problem of convolved mixtures by explicitly exploiting the non-stationarity of the acoustic sources. The learning rule is based on second-order statistics and is derived by natural gradient minimization. The proposed initialization of the algorithm is based on the null beamforming principle. This method leads to improved separation performance, and the algorithm is able to estimate long unmixing FIR filters in the time domain due to the geometric initialization. We also propose a post-filtering method for dewhitening which is based on the scaling technique in frequency-domain BSS. The validity of the proposed method is shown by computer simulations. Our experimental results confirm that the algorithm is capable of separating real-world speech mixtures and can be applied to short learning data sets down to a few seconds. Our results also confirm that the proposed dewhitening post-filtering method maintains the spectral content of the original speech in the separated output.
Citations: 55
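The core idea of exploiting non-stationarity with second-order statistics can be illustrated in the much simpler instantaneous-mixture case: covariance matrices estimated over two blocks with different source variances can be jointly diagonalized to recover an unmixing matrix. The sketch below is not the paper's time-domain convolutive algorithm; the signals, block structure, and mixing matrix are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000
# Two non-stationary sources: each changes variance between the two halves.
s1 = np.concatenate([rng.normal(0, 1.0, n), rng.normal(0, 3.0, n)])
s2 = np.concatenate([rng.normal(0, 2.0, n), rng.normal(0, 0.5, n)])
S = np.vstack([s1, s2])
A = np.array([[1.0, 0.6], [0.4, 1.0]])      # hypothetical mixing matrix
X = A @ S                                   # observed mixtures

# Second-order statistics, one covariance per stationarity block.
R1 = np.cov(X[:, :n])
R2 = np.cov(X[:, n:])
# Joint diagonalization: eigenvectors of R1^{-1} R2 are the columns of A^{-T},
# so their transpose unmixes the signals (up to scale and permutation).
_, W = np.linalg.eig(np.linalg.solve(R1, R2))
W = W.real                                  # eigenvalues are real for SPD R1, R2
Y = W.T @ X
```

Each recovered row of `Y` correlates almost perfectly with one original source, which is the most one can ask of BSS (scale and order are inherently ambiguous).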
Statistical descriptor of normality based on Hotelling's T² statistic and mixture of Gaussians
A. Dolia
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030052
Abstract: Novelty detection is an issue of primary importance as it can help to provide an improvement in the reliability of machine health monitoring. Novelty detection estimates the model of the normal operating regime or state and verifies whether new data deviates from that regime. Feature extraction techniques using vibration data and novelty detection methods based on mixture of Gaussians (MoG), Chebyshev bound, Hotelling's statistic, and support vector machine (SVM) are discussed. A statistical descriptor of normality based on Hotelling's statistic and mixture of Gaussians is proposed. The performance of novelty detection algorithms based on the aforementioned techniques is analyzed for both real-life and artificial (real data with simulated load regime) vibration datasets. The proposed method demonstrates encouraging performance on real datasets with simulated load regime.
Citations: 1
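The Hotelling's T² ingredient of the descriptor is, in essence, a Mahalanobis-type distance from the normal-regime mean, scaled by the inverse covariance: a new sample far from the training cloud scores high and is flagged as novel. A minimal sketch with synthetic 2-D features; the data, dimensions, and any threshold are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# "Normal regime" training data: synthetic 2-D features standing in for
# vibration statistics.
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 0.5]], size=5000)
mu = X.mean(axis=0)
Sinv = np.linalg.inv(np.cov(X, rowvar=False))

def t2(x):
    """Hotelling's T^2 score: squared distance from the normal-regime mean,
    weighted by the inverse sample covariance."""
    d = x - mu
    return d @ Sinv @ d

# A typical point scores low; a far-away (novel) point scores high.
normal_score = t2(np.array([0.1, -0.2]))
novel_score = t2(np.array([5.0, 5.0]))
```

Under a Gaussian assumption, T² follows a known distribution, so a detection threshold can be set from a chosen false-alarm rate rather than tuned by hand.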
Facial expression analysis using shape and motion information extracted by convolutional neural networks
B. Fasel
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030072
Abstract: We discuss a neural networks-based face analysis approach that is able to cope with faces subject to pose and lighting variations. Head pose variations in particular are difficult to tackle, and many face analysis methods require the use of sophisticated normalization procedures. Data-driven shape and motion-based face analysis approaches are introduced that are not only capable of extracting features relevant to a given face analysis task, but are also robust with regard to translation and scale variations. This is achieved by deploying convolutional and time-delayed neural networks, which are either trained for face shape deformation or facial motion analysis.
Citations: 12
Finding temporal structure in music: blues improvisation with LSTM recurrent networks
D. Eck, J. Schmidhuber
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030094
Abstract: We consider the problem of extracting essential ingredients of music signals, such as a well-defined global temporal structure in the form of nested periodicities (or meter). We investigate whether we can construct an adaptive signal processing device that learns by example how to generate new instances of a given musical style. Because recurrent neural networks (RNNs) can, in principle, learn the temporal structure of a signal, they are good candidates for such a task. Unfortunately, music composed by standard RNNs often lacks global coherence. The reason for this failure seems to be that RNNs cannot keep track of temporally distant events that indicate global music structure. Long short-term memory (LSTM) has succeeded in similar domains where other RNNs have failed, such as timing and counting and the learning of context-sensitive languages. We show that LSTM is also a good mechanism for learning to compose music. We present experimental results showing that LSTM successfully learns a form of blues music and is able to compose novel (and we believe pleasing) melodies in that style. Remarkably, once the network has found the relevant structure, it does not drift from it: LSTM is able to play the blues with good timing and proper structure as long as one is willing to listen.
Citations: 245
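The gating mechanism that lets LSTM retain temporally distant structure can be sketched as a single cell update: the forget gate decides how much of the old cell state survives, so information can persist across many steps. This is a toy forward pass with random weights and invented dimensions; the paper trains full LSTM networks on blues chord and melody sequences.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step: input/forget/output gates plus a candidate cell value."""
    z = W @ x + U @ h + b            # stacked pre-activations, shape (4*d,)
    d = h.shape[0]
    i = sigmoid(z[:d])               # input gate
    f = sigmoid(z[d:2*d])            # forget gate
    o = sigmoid(z[2*d:3*d])          # output gate
    g = np.tanh(z[3*d:])             # candidate cell value
    c_new = f * c + i * g            # gated memory: f near 1 preserves the past
    h_new = o * np.tanh(c_new)       # hidden output
    return h_new, c_new

rng = np.random.default_rng(2)
d, m = 4, 3                          # hidden size, input size (toy values)
W = rng.normal(0, 0.5, (4*d, m))
U = rng.normal(0, 0.5, (4*d, d))
b = np.zeros(4*d)
h, c = np.zeros(d), np.zeros(d)
for t in range(8):                   # run over a short toy "note" sequence
    x = np.zeros(m)
    x[t % m] = 1.0                   # one-hot note encoding
    h, c = lstm_step(x, h, c, W, U, b)
```

Because the cell state is updated additively rather than squashed at every step, gradients through `c` do not vanish the way they do in a standard RNN, which is the property the authors rely on for global musical structure.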
Parallel and separable recursive Levenberg-Marquardt training algorithm
V. Asirvadam, S. McLoone, G. Irwin
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030024
Abstract: A novel decomposed recursive Levenberg-Marquardt (RLM) algorithm is derived for the training of feedforward neural networks. By neglecting interneuron weight correlations, the recently proposed RLM training algorithm can be decomposed at neuron level, enabling weights to be updated in an efficient parallel manner. A separable least squares implementation of decomposed RLM is also introduced. Experimental results for two nonlinear time series problems demonstrate the superiority of the new training algorithms.
Citations: 36
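The update that the recursive, per-neuron decomposed algorithm approximates is the damped Gauss-Newton (Levenberg-Marquardt) step. A batch sketch on a one-parameter toy fit; the model and data are invented, and the paper's variant is recursive and neglects inter-neuron correlations rather than solving the full system as done here.

```python
import numpy as np

# Fit y = exp(a*x) to clean data with a_true = 0.7 via Levenberg-Marquardt.
x = np.linspace(0, 1, 50)
y = np.exp(0.7 * x)

a, lam = 0.0, 1e-2                       # initial parameter and damping factor
for _ in range(50):
    r = y - np.exp(a * x)                # residuals
    J = -(x * np.exp(a * x))[:, None]    # Jacobian of residuals w.r.t. a
    H = J.T @ J + lam * np.eye(1)        # damped Gauss-Newton "Hessian"
    step = np.linalg.solve(H, J.T @ r)   # LM step: (J'J + lam*I)^-1 J'r
    a = a - step.item()
```

The damping term `lam` interpolates between gradient descent (large `lam`) and Gauss-Newton (small `lam`); here it is held fixed for simplicity, whereas practical LM adapts it per step.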
An efficient SMO-like algorithm for multiclass SVM
F. Aiolli, A. Sperduti
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030041
Abstract: Starting from a reformulation of the Crammer and Singer multiclass kernel machine (see Journal of Machine Learning Research, vol. 2, p. 265-92, Dec. 2001), we propose a sequential minimal optimization (SMO) like algorithm for incremental and fast optimization of the Lagrangian. The proposed formulation allowed us to define very effective new pattern selection strategies which lead to better empirical results.
Citations: 14
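The objective underlying the Crammer-Singer formulation is a multiclass hinge loss: a sample is penalized by how much the best-scoring wrong class violates a unit margin against the true class. The SMO-like algorithm works on the dual of this objective; the primal loss itself can be sketched directly (the scores and labels below are invented).

```python
import numpy as np

def cs_multiclass_hinge(scores, y):
    """Crammer-Singer multiclass hinge loss: largest margin violation
    1 - (score_y - score_r) over wrong classes r, clipped at zero."""
    margins = 1.0 + scores - scores[y]   # margin violation per class
    margins[y] = 0.0                     # the true class never violates itself
    return max(0.0, margins.max())
```

For example, with scores `[2.0, 0.5, -1.0]`: if the true class is 0, every wrong class is beaten by more than the unit margin and the loss is zero; if the true class is 1, class 0 violates the margin by 2.5.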
A two-stage SVM architecture for predicting the disulfide bonding state of cysteines
P. Frasconi, Andrea Passerini, A. Vullo
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030014
Abstract: Cysteines may form covalent bonds, known as disulfide bridges, that have an important role in stabilizing the native conformation of proteins. Several methods have been proposed for predicting the bonding state of cysteines, either using local context or using global protein descriptors. In this paper we introduce an SVM-based predictor that operates in two stages. The first stage is a multi-class classifier that operates at the protein level. The second stage is a binary classifier that refines the prediction by exploiting local context enriched with evolutionary information in the form of multiple alignment profiles. The prediction accuracy of the system is 83.6%, measured by 5-fold cross validation on a set of 716 proteins from the September 2001 PDB Select dataset.
Citations: 40
Feature selection for off-line recognition of different size signatures
George D. C. Cavalcanti, Rodrigo C. Doria, E. C. B. C. Filho
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030047
Abstract: The aim of this work is to select a set of features that perform well on the problem of recognizing signatures of different sizes. The signature database was formed from three sizes of signatures per user: small, medium, and big. This study uses structural features, pseudo-dynamic features, and five moment groups. The feature selection method chosen is one that selects the best individual features based on classifiers such as Bayes and k-NN.
Citations: 13
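The individual-feature selection strategy (rank each feature by the accuracy of a simple classifier using that feature alone) can be sketched with a leave-one-out 1-D k-NN. The two synthetic features below are invented; the paper uses structural, pseudo-dynamic, and moment features extracted from signature images.

```python
import numpy as np

rng = np.random.default_rng(3)
# Toy data: feature 0 is discriminative, feature 1 is pure noise.
n = 200
y = rng.integers(0, 2, n)
X = np.column_stack([y + rng.normal(0, 0.3, n),   # informative feature
                     rng.normal(0, 1.0, n)])      # noise feature

def knn_accuracy(feature, labels, k=3):
    """Leave-one-out k-NN accuracy using a single 1-D feature."""
    d = np.abs(feature[:, None] - feature[None, :])
    np.fill_diagonal(d, np.inf)                   # exclude each sample itself
    nn = np.argsort(d, axis=1)[:, :k]             # k nearest neighbours
    pred = (labels[nn].mean(axis=1) > 0.5).astype(int)  # majority vote
    return (pred == labels).mean()

# Rank features individually; keep the best one(s).
scores = [knn_accuracy(X[:, j], y) for j in range(X.shape[1])]
best = int(np.argmax(scores))
```

Ranking features one at a time is cheap but ignores feature interactions; the trade-off is exactly why wrapper-style individual selection with simple classifiers like Bayes or k-NN is a common baseline.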
Simple algorithms for decorrelation-based blind source separation
S. Douglas
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030066
Abstract: We present simple adaptive algorithms that perform blind source separation for spatially-independent and temporally-correlated source signals. The proposed algorithms are modified versions of a well-known natural gradient prewhitening scheme, and the simplest version has almost the same complexity as this prewhitening method. We provide a stationary point analysis of our schemes, proving that the only locally-stable stationary point results in separated sources with unit variances and a guaranteed output ordering. We also show how to modify the approaches so that joint subspace analysis and decorrelation-based source separation are performed. Simulations verify the separation capabilities of the schemes.
Citations: 3
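The base scheme the paper builds on is the natural-gradient prewhitening rule W ← W + μ(I − y yᵀ)W with y = W x, whose stable stationary point yields unit-variance, decorrelated outputs. A sketch on an instantaneous 2-D mixture; the signals, mixing matrix, and step-size schedule are invented, and this is only the prewhitening stage, not the paper's full separation algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)
# Correlated 2-D observations; the adaptive rule should whiten them.
A = np.array([[1.0, 0.8], [0.2, 1.0]])
X = A @ rng.normal(size=(2, 50000))

W = np.eye(2)
for t in range(X.shape[1]):
    x = X[:, t:t+1]
    y = W @ x
    mu = 1.0 / (100.0 + t)                      # decaying step size (assumed)
    W = W + mu * (np.eye(2) - y @ y.T) @ W      # natural-gradient whitening rule

Y = W @ X
C = np.cov(Y)                                   # should approach the identity
```

At convergence E[y yᵀ] = I, so the update term vanishes on average; the per-sample cost is just a few small matrix products, which is the "almost the same complexity" point made in the abstract.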
Analysis of support vector machines
S. Abe
Pub Date: 2002-11-07 · DOI: 10.1109/NNSP.2002.1030020
Abstract: We compare L1 and L2 soft margin support vector machines from the standpoint of positive definiteness, the number of support vectors, and uniqueness and degeneracy of solutions. Since the Hessian matrix of L2 SVM is positive definite, the number of support vectors for L2 SVM is larger than or equal to that for L1 SVM. For L1 SVM, if there are plural irreducible sets of support vectors, the solution of the dual problem is non-unique although the primal problem is unique. Similar to L1 SVM, degenerate solutions, in which all the data are classified into one class, occur for L2 SVM.
Citations: 26
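The positive-definiteness claim can be checked numerically: the dual Hessian of the L1 SVM is the label-weighted kernel matrix Q, which is only positive semi-definite whenever there are more samples than the kernel's effective dimension, while the L2 SVM adds I/C to the diagonal and is therefore strictly positive definite. A sketch with a linear kernel; the data and the choice C = 1 are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(10, 2))                    # 10 samples, 2-D features
y = rng.choice([-1.0, 1.0], size=10)            # random binary labels

# Dual Hessian for a linear kernel: Q_ij = y_i y_j <x_i, x_j>.
G = y[:, None] * X
Q = G @ G.T                                     # rank <= 2, hence only PSD
C = 1.0
eig_l1 = np.linalg.eigvalsh(Q)                  # L1 SVM Hessian: Q alone
eig_l2 = np.linalg.eigvalsh(Q + np.eye(10) / C) # L2 SVM Hessian: Q + I/C
```

The smallest eigenvalue of Q is (numerically) zero, while the diagonal shift pushes every eigenvalue of the L2 Hessian up by at least 1/C, which is what guarantees a unique dual solution for the L2 machine.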