Trading approximation quality versus sparsity within incremental automatic relevance determination frameworks

D. Shutin, Thomas Buchgraber

2012 IEEE International Workshop on Machine Learning for Signal Processing, 23 September 2012
DOI: 10.1109/MLSP.2012.6349805
Citations: 3
Abstract
In this paper a trade-off between sparsity and approximation quality of models learned with incremental automatic relevance determination (IARD) is addressed. IARD algorithms form a class of sparse Bayesian learning (SBL) schemes that permit a simple, intuitive adjustment of the estimation expressions, with the adjustment having a direct interpretation in terms of signal-to-noise ratio (SNR). This adjustment implements a trade-off between the sparsity of the estimated model and its accuracy in terms of residual mean-square error (MSE). It is found that the adjustment affects IARD performance differently depending on whether the measurement model coincides with the estimation model used. In the former case, setting the adjustment parameter to the true SNR yields optimum IARD performance, with the smallest MSE and estimated signal sparsity; moreover, the estimated sparsity then coincides with the true signal sparsity. In contrast, when there is a model mismatch, a lower MSE can be achieved only at the expense of less sparse models. In this case the adjustment parameter simply trades estimated signal sparsity against model accuracy.
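The abstract does not reproduce the paper's IARD update expressions, so the sketch below is only an illustrative stand-in: a batch ARD-style sparse Bayesian learner (EM hyperparameter updates) in which a hypothetical pruning threshold `prune_thresh` plays the role of the sparsity-versus-MSE knob described above. All function and parameter names here are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def sbl_ard(Phi, y, noise_var, prune_thresh=1e3, n_iter=100):
    """Batch ARD-style sparse Bayesian learning (EM updates).

    Each basis function i carries a precision hyperparameter alpha[i];
    bases whose precision exceeds prune_thresh are pruned from the model.
    prune_thresh is a hypothetical knob standing in for the SNR-based
    adjustment discussed in the abstract: lowering it prunes more
    aggressively (sparser model, larger residual MSE).
    """
    N, M = Phi.shape
    alpha = np.ones(M)       # ARD precisions, one per basis function
    keep = np.arange(M)      # indices of bases still in the model
    for _ in range(n_iter):
        P = Phi[:, keep]
        # Gaussian posterior over the active weights: N(mu, Sigma)
        Sigma = np.linalg.inv(np.diag(alpha[keep]) + P.T @ P / noise_var)
        mu = Sigma @ P.T @ y / noise_var
        # EM update of the precisions, then prune "irrelevant" bases
        alpha[keep] = 1.0 / (mu**2 + np.diag(Sigma))
        keep = keep[alpha[keep] < prune_thresh]
    # Posterior-mean weights on the surviving support
    w = np.zeros(M)
    P = Phi[:, keep]
    Sigma = np.linalg.inv(np.diag(alpha[keep]) + P.T @ P / noise_var)
    w[keep] = Sigma @ P.T @ y / noise_var
    return w, keep

# Toy demonstration: a 3-sparse signal in a random dictionary
rng = np.random.default_rng(0)
N, M = 60, 30
Phi = rng.standard_normal((N, M))
Phi /= np.linalg.norm(Phi, axis=0)        # unit-norm columns
w_true = np.zeros(M)
w_true[[3, 11, 25]] = [3.0, -2.5, 2.0]
y = Phi @ w_true + 0.01 * rng.standard_normal(N)
w_hat, support = sbl_ard(Phi, y, noise_var=1e-4)
```

Raising `prune_thresh` retains more basis functions and drives the residual MSE down, while lowering it yields a sparser estimate at the cost of a larger residual, mirroring the sparsity/accuracy trade-off the abstract attributes to the SNR-based adjustment parameter.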