An efficient regularized NLMF algorithm

S. M. U. Talha, Hammad Hussain, M. Moinuddin
Published in: 2016 6th International Conference on Intelligent and Advanced Systems (ICIAS), August 2016
DOI: 10.1109/ICIAS.2016.7824143
Citations: 0

Abstract

The least-mean-square (LMS) algorithm is one of the most commonly used techniques for optimized solutions in adaptive schemes, especially in a Gaussian environment. However, the least mean fourth (LMF) algorithm and its variants, such as the normalized LMF (NLMF), perform better in non-Gaussian environments. Conventional LMF algorithms usually diverge in a non-Gaussian environment with dynamic input. Conventionally, regularization is achieved by adding a small constant compensation term to the denominator of the learning rate to protect the algorithm from divergence. This paper introduces an efficient time-varying regularized normalized least mean fourth (R-NLMF) algorithm, in which the regularization term is made time-varying and gradient-adaptive according to a steepest-descent approach. The proposed algorithm thus adapts its learning rate to the environment and the input signal dynamics. A similar approach, the generalized normalized gradient descent (GNGD) algorithm, has previously been applied to the normalized least-mean-square (NLMS) algorithm in a Gaussian environment. However, because of its dependence on NLMS, the GNGD algorithm's performance degrades in a non-Gaussian environment. To overcome this problem, an efficient regularized NLMF algorithm for non-Gaussian environments is proposed. The algorithm shows promising results, achieving faster convergence while maintaining lower steady-state misadjustment. Simulations are carried out to support the theoretical development.
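The abstract does not reproduce the paper's update equations, but the general idea can be sketched: an NLMF weight update normalized by the fourth power of the regressor norm, with the regularization term in the denominator itself adapted by a GNGD-style gradient step. The Python simulation below is a minimal system-identification sketch under assumed step sizes, a unit-norm unknown system, and an illustrative eps-gradient analogous to GNGD; it is not the paper's exact algorithm.

```python
import numpy as np

# Minimal system-identification sketch of the R-NLMF idea: an NLMF
# weight update normalized by ||u||^4, with the regularization term
# eps in the denominator adapted by a GNGD-style gradient step.
# mu, rho, and the eps update below are illustrative assumptions,
# not the paper's exact rule.
rng = np.random.default_rng(0)

M = 8                                   # adaptive filter length
h = rng.standard_normal(M)
h /= np.linalg.norm(h)                  # unit-norm unknown system (assumption)
N = 20000
x = rng.standard_normal(N)              # input signal
d = np.convolve(x, h)[:N]               # desired signal
d += 0.01 * rng.uniform(-1, 1, N)       # uniform (non-Gaussian) noise

w = np.zeros(M)                         # adaptive weights
mu, rho = 0.5, 1e-3                     # step sizes (assumed values)
eps = 1.0                               # time-varying regularization term
prev = None                             # (error, regressor, denom) at n-1

for n in range(M, N):
    u = x[n - M + 1:n + 1][::-1]        # regressor [x(n), ..., x(n-M+1)]
    e = d[n] - w @ u                    # a-priori error
    denom = eps + (u @ u) ** 2          # eps + ||u||^4 (NLMF normalization)
    w += mu * (e ** 3) * u / denom      # fourth-order (LMF-type) update
    if prev is not None:
        # GNGD-style gradient step on eps: correlate consecutive
        # update directions (sketched analogy, not the paper's rule).
        e1, u1, d1 = prev
        eps -= rho * mu * (e ** 3) * (e1 ** 3) * (u @ u1) / d1 ** 2
        eps = max(eps, 1e-6)            # keep the denominator positive
    prev = (e, u, denom)

misalignment = np.linalg.norm(w - h) / np.linalg.norm(h)
print(f"final misalignment: {misalignment:.3f}")
```

With a fourth-order error term the effective step size shrinks as the error falls, which is why the e**3 factor appears in the weight update; the ||u||^4 normalization bounds the update magnitude and is what keeps this sketch stable even with a large initial error.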