{"title":"Redundant time-frequency marginals for chirplet decomposition","authors":"L. Weruaga","doi":"10.1109/MLSP.2012.6349775","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349775","url":null,"abstract":"This paper presents the foundations of a novel method for chirplet signal decomposition. In contrast to basis-pursuit techniques on over-complete dictionaries, the proposed method uses a reduced set of adaptive parametric chirplets. The estimation criterion corresponds to the maximization of the likelihood of the chirplet parameters from redundant time-frequency marginals. The optimization algorithm that results from this scenario combines Gaussian mixture models and Huber's robust regression in an iterative fashion. Simulation results support the proposed avenue.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123210141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trading approximation quality versus sparsity within incremental automatic relevance determination frameworks","authors":"D. Shutin, Thomas Buchgraber","doi":"10.1109/MLSP.2012.6349805","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349805","url":null,"abstract":"In this paper a trade-off between sparsity and approximation quality of models learned with incremental automatic relevance determination (IARD) is addressed. An IARD algorithm is a class of sparse Bayesian learning (SBL) schemes. It permits an intuitive and simple adjustment of estimation expressions, with the adjustment having a simple interpretation in terms of signal-to-noise ratio (SNR). This adjustment allows for implementing a trade-off between the sparsity of the estimated model and its accuracy in terms of residual mean-square error (MSE). It is found that this adjustment has a different impact on the IARD performance depending on whether or not the measurement model coincides with the estimation model used. Specifically, in the former case setting the adjustment parameter to the true SNR leads to optimum IARD performance with the smallest MSE and estimated signal sparsity; moreover, the estimated sparsity then coincides with the true signal sparsity. In contrast, when there is a model mismatch, a lower MSE can be achieved only at the expense of less sparse models. In this case the adjustment parameter simply trades the estimated signal sparsity against the accuracy of the model.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133445918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Graphical methods for inequality constraints in marginalized DAGs","authors":"R. Evans","doi":"10.1109/MLSP.2012.6349796","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349796","url":null,"abstract":"We present a graphical approach to deriving inequality constraints for directed acyclic graph (DAG) models, where some variables are unobserved. In particular we show that the observed distribution of a discrete model is always restricted if any two observed variables are neither adjacent in the graph, nor share a latent parent; this generalizes the well known instrumental inequality. The method also provides inequalities on interventional distributions, which can be used to bound causal effects. All these constraints are characterized in terms of a new graphical separation criterion, providing an easy and intuitive method for their derivation.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125314388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Stochastic unfolding","authors":"Ke Sun, E. Bruno, S. Marchand-Maillet","doi":"10.1109/MLSP.2012.6349713","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349713","url":null,"abstract":"This paper proposes a nonlinear dimensionality reduction technique called Stochastic Unfolding (SU). Similar to Stochastic Neighbour Embedding (SNE), N input signals are first encoded into an N × N matrix of probability distributions for subsequent learning. Unlike SNE, these probabilities are not to be preserved in the embedding, but rather deformed so that the embedded signals have less curvature than the original signals. The cost function is based on another type of statistical estimation instead of the commonly-used maximum likelihood estimator. Its gradient presents a Mexican-hat shape with local attraction and remote repulsion, which was previously used as a heuristic and is theoretically justified in this work. Experimental results compared with the state of the art show that SU is good at preserving topology and performs best on datasets with local manifold structures.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"258 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115801948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sparse spectral analysis of atrial fibrillation electrograms","authors":"S. Monzón, T. Trigano, D. Luengo, Antonio Artés-Rodríguez","doi":"10.1109/MLSP.2012.6349721","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349721","url":null,"abstract":"Atrial fibrillation (AF) is a common heart disorder. One of the most prominent hypotheses about its initiation and maintenance considers multiple uncoordinated activation foci inside the atrium. However, the implicit assumption behind all the signal processing techniques used for AF, such as dominant frequency and organization analysis, is the existence of a single regular component in the observed signals. In this paper we take into account the existence of multiple foci, performing a spectral analysis to detect their number and frequencies. In order to obtain a cleaner signal on which the spectral analysis can be performed, we introduce sparsity-aware learning techniques to infer the spike trains corresponding to the activations. The good performance of the proposed algorithm is demonstrated on both synthetic and real data.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127735216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast design of efficient dictionaries for sparse representations","authors":"Cristian Rusu","doi":"10.1109/MLSP.2012.6349795","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349795","url":null,"abstract":"One of the central issues in the field of sparse representations is the design of overcomplete dictionaries with a fixed sparsity level from a given dataset. This article describes a fast and efficient procedure for the design of such dictionaries. The method implements the following ideas: a reduction technique is applied to the initial dataset to speed up the upcoming procedure; the actual training procedure runs a more sophisticated iterative expanding procedure based on K-SVD steps. Numerical experiments on image data show the effectiveness of the proposed design strategy.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114499746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Probabilistic interpolative decomposition","authors":"Ismail Ari, A. Cemgil, L. Akarun","doi":"10.1109/MLSP.2012.6349798","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349798","url":null,"abstract":"Interpolative decomposition (ID) is a low-rank matrix decomposition where the data matrix is expressed via a subset of its own columns. In this work, we propose a novel probabilistic method for ID in which it is expressed as a statistical model within a Bayesian framework. The proposed method considerably differs from other ID methods in the literature: It handles the model selection automatically and enables the construction of problem-specific interpolative decompositions. We derive the analytical solution for the normal distribution and we provide a numerical solution for the generic case. Simulation results on synthetic data are provided to illustrate that the method converges to the true decomposition, independent of the initialization, and that it can successfully handle noise. In addition, we apply probabilistic ID to the problem of automatic polyphonic music transcription to extract important information from a huge dictionary of spectrum instances. We supply comparative results against other techniques proposed in the literature and show that our method performs better. Probabilistic interpolative decomposition serves as a promising feature selection and de-noising tool to be exploited in big data problems.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127727802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tree-structured expectation propagation for LDPC decoding over the AWGN channel","authors":"Luis Salamanca, J. J. Murillo-Fuentes, P. Olmos, F. Pérez-Cruz","doi":"10.1109/MLSP.2012.6349716","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349716","url":null,"abstract":"In this paper, we propose the tree-structured expectation propagation (TEP) algorithm for low-density parity-check (LDPC) decoding over the additive white Gaussian noise (AWGN) channel. By imposing a tree-like approximation over the graphical model of the code, this algorithm introduces pairwise marginal constraints over pairs of variables, which provide joint information about the related variables. Thanks to this, the proposed TEP decoder improves the performance of the standard belief propagation (BP) solution. An efficient way of constructing the tree-like structure is also described. The simulation results illustrate the TEP decoder gain in the finite-length regime, compared to the standard BP solution. For code lengths shorter than n = 512, the gain in the waterfall region reaches up to 0.25 dB. We also notice a remarkable reduction of the error floor.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":" 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120830441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Online regularized discriminant analysis","authors":"U. Orhan, Ang Li, Deniz Erdoğmuş","doi":"10.1109/MLSP.2012.6349761","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349761","url":null,"abstract":"Learning the signal statistics and calibration are essential procedures for supervised machine learning algorithms. For some applications, e.g., ERP-based brain computer interfaces, it may be important to reduce the duration of the calibration, especially for those requiring frequent training of the classifiers. However, simply decreasing the number of calibration samples would decrease the performance of the algorithm if it suffers from the curse of dimensionality or a low signal-to-noise ratio. As a remedy, we propose estimating the performance of the algorithm during the calibration in an online manner, which allows us to terminate the calibration session when required. Consequently, early termination means a reduction in time spent. In this paper, we present an updating algorithm for regularized discriminant analysis (RDA) that modifies the classifier using the newly collected supervised data. The proposed procedure considerably reduces the time required for updating the RDA classifiers compared to recalibrating them, making the performance estimation applicable in real time.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"421 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116711398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A random walk based model incorporating social information for recommendations","authors":"Shang Shang, S. Kulkarni, P. Cuff, Pan Hui","doi":"10.1109/MLSP.2012.6349732","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349732","url":null,"abstract":"Collaborative filtering (CF) is one of the most popular approaches to build a recommendation system. In this paper, we propose a hybrid collaborative filtering model based on a Markovian random walk to address the data sparsity and cold start problems in recommendation systems. More precisely, we construct a directed graph whose nodes consist of items and users, together with item content, user profile and social network information. We incorporate users' ratings into the edge settings in the graph model. The model provides personalized recommendations and predictions to individuals and groups. The proposed algorithms are evaluated on the MovieLens and Epinions datasets. Experimental results show that the proposed methods perform well compared with other graph-based methods, especially in the cold start case.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"13 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114129924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}