{"title":"Hidden Markov Models for detecting anomalous fish trajectories in underwater footage","authors":"C. Spampinato, S. Palazzo","doi":"10.1109/MLSP.2012.6349768","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349768","url":null,"abstract":"In this paper we propose an automatic system for the identification of anomalous fish trajectories extracted by processing underwater footage. Our approach exploits Hidden Markov Models (HMMs) to represent and compare trajectories. Multi-Dimensional Scaling (MDS) is applied to project the trajectories onto a low-dimensional vector space, while preserving the similarity between the original data. Usual or normal events are then defined as sets of trajectories clustered together, on which HMMs are trained and used to check whether a new trajectory matches one of the usual events, or can be labeled as anomalous. This approach was tested on 3700 trajectories, obtained by processing a set of underwater videos with state-of-the-art object detection and tracking algorithms, by assessing its capability to distinguish between correct trajectories and erroneous ones due, for instance, to object occlusions, tracker mis-associations and background movements.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125421696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Landmine detection with Multiple Instance Hidden Markov Models","authors":"S. E. Yüksel, Jeremy Bolton, P. Gader","doi":"10.1109/MLSP.2012.6349734","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349734","url":null,"abstract":"A novel Multiple Instance Hidden Markov Model (MI-HMM) is introduced for classification of ambiguous time-series data, and its training is accomplished via Metropolis-Hastings sampling. Without introducing any additional parameters, the MI-HMM provides an elegant and simple way to learn the parameters of an HMM in a Multiple Instance Learning (MIL) framework. The efficacy of the model is shown on a real landmine dataset. Experiments on the landmine dataset show that MI-HMM learning is very effective, and outperforms the state-of-the-art models that are currently being used in the field for landmine detection.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123230533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Unsupervised feature selection based on non-parametric mutual information","authors":"Lev Faivishevsky, J. Goldberger","doi":"10.1109/MLSP.2012.6349791","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349791","url":null,"abstract":"We present a novel filter approach to unsupervised feature selection based on the mutual information estimation between features. Our feature selection approach does not impose a parametric model on the data and no clustering structure is estimated. Instead, to measure the statistical dependence between features, we employ a mutual information criterion, which is computed by using a non-parametric method, and remove uncorrelated features. Numerical experiments on synthetic and real world tasks show that the performance of our algorithm is comparable to previously suggested state-of-the-art methods.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130041237","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Exploiting graph embedding in support vector machines","authors":"Georgios Arvanitidis, A. Tefas","doi":"10.1109/MLSP.2012.6349736","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349736","url":null,"abstract":"In this paper we introduce a novel classification framework that is based on the combination of the support vector machine classifier and the graph embedding framework. In particular, we propose the substitution of the support vector machine kernel with sub-space or sub-manifold kernels that are constructed based on the graph embedding framework. Our technique combines the very good generalization ability of the support vector machine classifier with the flexibility of the graph embedding framework, resulting in improved classification performance. Experimental results on several benchmark and real-life data sets further support our claim of improved classification performance.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130226407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sequential nonnegative tucker decomposition on multi-way array of time-frequency transformed event-related potentials","authors":"F. Cong, Guoxu Zhou, Qibin Zhao, Qiang Wu, A. Nandi, T. Ristaniemi, A. Cichocki","doi":"10.1109/MLSP.2012.6349788","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349788","url":null,"abstract":"Tensor factorization offers exciting advantages for EEG analysis: it simultaneously exploits information in the time, frequency and spatial domains, and it enables data in different domains to be visualized concurrently. Event-related potentials (ERPs) are usually investigated by group-level analysis, for which tensor factorization can be used. However, a tensor containing the time-frequency representations of ERPs from multiple channels and multiple participants can be immense, and decomposing such a tensor is time-consuming. The low-rank approximation based sequential nonnegative Tucker decomposition (LraSNTD) has recently been developed and shown to be computationally efficient on several benchmark datasets. Here, LraSNTD is applied to decompose a fourth-order tensor representation of ERPs. We find that the results from LraSNTD are very similar to those of a benchmark nonnegative Tucker decomposition algorithm, so LraSNTD is promising for ERP studies.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123545022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hierarchical sparse brain network estimation","authors":"A. Seghouane, Muhammad Usman Khalid","doi":"10.1109/MLSP.2012.6349756","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349756","url":null,"abstract":"Brain networks describe the dependence relationships between the brain regions under consideration through the estimation of the precision matrix. An approach based on linear regression is adopted here for estimating the partial correlation matrix from functional brain imaging data. Since brain networks are sparse and hierarchical, l1-norm penalized regression has been used to estimate sparse brain networks. Although capable of encoding sparsity, the l1-norm penalty alone does not incorporate prior information about the hierarchical structure when estimating brain networks. In this paper, a new l1 regularization method that applies the sparsity constraint at hierarchical levels is proposed and its implementation described. This hierarchical sparsity approach has the advantage of generating brain networks that are sparse at all levels of the hierarchy. The performance of the proposed approach in comparison to other existing methods is illustrated on real fMRI data.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128968145","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the generalization ability of distributed online learners","authors":"Zaid J. Towfic, Jianshu Chen, A. H. Sayed","doi":"10.1109/MLSP.2012.6349778","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349778","url":null,"abstract":"We propose a fully-distributed stochastic-gradient strategy based on diffusion adaptation techniques. We show that, for strongly convex risk functions, the excess-risk at every node decays at the rate of O(1/Ni), where N is the number of learners and i is the iteration index. In this way, the distributed diffusion strategy, which relies only on local interactions, is able to achieve the same convergence rate as centralized strategies that have access to all data from the nodes at every iteration. We also show that every learner is able to improve its excess-risk in comparison to the non-cooperative mode of operation where each learner would operate independently of the other learners.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116724572","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion of local degradation features for No-Reference Video Quality Assessment","authors":"Martin D. Dimitrievski, Z. Ivanovski","doi":"10.1109/MLSP.2012.6349737","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349737","url":null,"abstract":"We propose a blind/No-Reference Video Quality Assessment (NR-VQA) algorithm using models for visibility of local spatio-temporal degradations. The paper focuses on the specific degradations present in H.264 coded videos and their impact on perceived visual quality. Joint and marginal distributions of local wavelet coefficients are used to train Epsilon Support Vector Regression (ε-SVR) models for specific degradation levels in order to predict the overall subjective scores. Separate models for low/medium/high activity regions within the video frames are considered, inspired from the nature of H.264 coder behavior. Experimental results show that blind assessment of video quality is possible as the proposed algorithm output correlates highly with human perception of quality.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"106 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132558894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mahalanobis distance on Grassmann manifold and its application to brain signal processing","authors":"Y. Washizawa, S. Hotta","doi":"10.1109/MLSP.2012.6349723","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349723","url":null,"abstract":"Multi-dimensional data such as image patterns, image sequences, and brain signals are often given in the form of variance-covariance matrices or their eigenspaces to represent their variations. For example, in face or object recognition problems, variations due to illumination and camera angle can be represented by eigenspaces. The set of such eigenspaces is called the Grassmann manifold, and simple distance measures on the Grassmann manifold, such as the projection metric, have been used in previous research. However, in linear spaces, if the distribution of patterns is not isotropic, statistical distances such as the Mahalanobis distance are more appropriate and outperform simple distances in many problems. In this paper, we introduce the Mahalanobis distance on the Grassmann manifold. Two experiments, an object recognition problem and a brain signal processing task, demonstrate the advantages of the proposed distance measure.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120958146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An online learning algorithm for mixture models of deformable templates","authors":"F. Maire, S. Lefebvre, R. Douc, É. Moulines","doi":"10.1109/MLSP.2012.6349725","DOIUrl":"https://doi.org/10.1109/MLSP.2012.6349725","url":null,"abstract":"The issue addressed in this paper is the unsupervised learning of observed shapes. More precisely, we aim at learning the main features of an object seen in different scenarios. We adapt the statistical framework from [1] to propose a model in which an object is described by independent classes representing its variability. We propose an algorithm that learns the characteristics of each class sequentially: each new observation improves our knowledge of the object. Such an algorithm is particularly well suited to real-time applications such as shape recognition or classification, but sequential learning turns out to be a challenging problem. Indeed, classic machine learning algorithms for missing-data problems, such as the Expectation-Maximization (EM) algorithm, are not designed to learn from sequentially acquired observations. Moreover, the hidden data simulation in a mixture model cannot be achieved properly using classic Markov Chain Monte Carlo (MCMC) algorithms such as the Gibbs sampler. Our proposal, among others, takes advantage of the contribution of Cappé and Moulines [2] for a sequential adaptation of the EM algorithm and of the work of Carlin and Chib [3] for simulating the hidden data posterior distribution.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134413764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}