{"title":"一种改进收敛特性的扩散稀疏RLS算法","authors":"B. K. Das, M. Chakraborty","doi":"10.1109/ISCAS.2016.7539138","DOIUrl":null,"url":null,"abstract":"A new sparsity aware recursive least squares (RLS) algorithm is proposed for distributed learning in a diffusion network. The algorithm deploys a RLS based adaptive filter at each node which is made sparsity aware by regularizing the conventional RLS cost function with a sparsity promoting penalty. The regularization introduces certain “zero-attracting” terms in the RLS update equation which help in shrinkage of the coefficients. Each node shares its tap weight information with every other node in its neighborhood and refines its own estimate by linearly combining the incoming tap weight information from neighboring nodes by a set of pre-defined weights. Results on both first and second order convergence of the algorithm are also provided. As simulations show, the proposed scheme outperforms other existing algorithms both in terms of convergence speed and steady state excess mean square error.","PeriodicalId":6546,"journal":{"name":"2016 IEEE International Symposium on Circuits and Systems (ISCAS)","volume":"22 1","pages":"2651-2654"},"PeriodicalIF":0.0000,"publicationDate":"2016-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"A new diffusion sparse RLS algorithm with improved convergence characteristics\",\"authors\":\"B. K. Das, M. Chakraborty\",\"doi\":\"10.1109/ISCAS.2016.7539138\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A new sparsity aware recursive least squares (RLS) algorithm is proposed for distributed learning in a diffusion network. The algorithm deploys a RLS based adaptive filter at each node which is made sparsity aware by regularizing the conventional RLS cost function with a sparsity promoting penalty. The regularization introduces certain “zero-attracting” terms in the RLS update equation which help in shrinkage of the coefficients. Each node shares its tap weight information with every other node in its neighborhood and refines its own estimate by linearly combining the incoming tap weight information from neighboring nodes by a set of pre-defined weights. Results on both first and second order convergence of the algorithm are also provided. 
As simulations show, the proposed scheme outperforms other existing algorithms both in terms of convergence speed and steady state excess mean square error.\",\"PeriodicalId\":6546,\"journal\":{\"name\":\"2016 IEEE International Symposium on Circuits and Systems (ISCAS)\",\"volume\":\"22 1\",\"pages\":\"2651-2654\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-05-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE International Symposium on Circuits and Systems (ISCAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ISCAS.2016.7539138\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE International Symposium on Circuits and Systems (ISCAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISCAS.2016.7539138","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A new diffusion sparse RLS algorithm with improved convergence characteristics
A new sparsity-aware recursive least squares (RLS) algorithm is proposed for distributed learning over a diffusion network. The algorithm deploys an RLS-based adaptive filter at each node, made sparsity aware by regularizing the conventional RLS cost function with a sparsity-promoting penalty. The regularization introduces "zero-attracting" terms into the RLS update equation, which shrink the filter coefficients toward zero. Each node shares its tap-weight information with every other node in its neighborhood and refines its own estimate by linearly combining the incoming tap-weight estimates from neighboring nodes using a set of pre-defined combination weights. Results on both first- and second-order convergence of the algorithm are also provided. Simulations show that the proposed scheme outperforms existing algorithms in both convergence speed and steady-state excess mean-square error.
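The abstract describes the algorithm only at a high level, so the following is a minimal NumPy sketch of one plausible realization rather than the paper's exact recursion: each node performs a zero-attracting RLS adapt step (with the l1-induced correction taken as -gamma * P @ sign(w), as in ZA-RLS-type variants), followed by an adapt-then-combine diffusion step. The function name `diffusion_sparse_rls`, the zero-attraction strength `gamma`, and the combination-matrix convention are assumptions introduced for illustration.

```python
import numpy as np

def diffusion_sparse_rls(U, D, A, lam=0.999, gamma=1e-3, delta=1e-2):
    """Sketch of a diffusion sparse RLS filter (adapt-then-combine).

    U : (N, T, M) regressor vectors, one stream per node
    D : (N, T)    desired responses, one stream per node
    A : (N, N)    combination matrix, column-stochastic; A[l, k] weights
                  node l's estimate in node k's combination step
    lam   : RLS forgetting factor
    gamma : zero-attraction (l1) strength -- assumed knob, not necessarily
            the paper's parameterization
    delta : P(0) = I/delta initialization of the inverse correlation matrix
    """
    N, T, M = U.shape
    W = np.zeros((N, M))                      # combined estimates w_k
    P = np.stack([np.eye(M) / delta] * N)     # per-node inverse correlation

    for t in range(T):
        Psi = np.empty_like(W)                # intermediate adapted estimates
        # --- adapt: sparsity-aware RLS step at every node ---
        for k in range(N):
            u = U[k, t]                                    # regressor, (M,)
            Pu = P[k] @ u
            g = Pu / (lam + u @ Pu)                        # RLS gain vector
            e = D[k, t] - u @ W[k]                         # a priori error
            # zero-attracting term shrinks small coefficients toward zero
            Psi[k] = W[k] + g * e - gamma * (P[k] @ np.sign(W[k]))
            P[k] = (P[k] - np.outer(g, Pu)) / lam          # rank-1 P update
        # --- combine: weighted average over each node's neighborhood ---
        W = A.T @ Psi
    return W
```

A toy run, under the same assumptions: a 4-node ring network identifying a sparse 16-tap system, with uniform combination weights over each node's neighborhood.

```python
rng = np.random.default_rng(0)
N, T, M = 4, 2000, 16
w_true = np.zeros(M); w_true[[2, 9]] = [1.0, -0.5]   # sparse target system
U = rng.standard_normal((N, T, M))
D = U @ w_true + 0.01 * rng.standard_normal((N, T))
A = (np.eye(N) + np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0)) / 3.0
W = diffusion_sparse_rls(U, D, A)
```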