Differentiable Programming based Step Size Optimization for LMS and NLMS Algorithms
K. Hayashi, Kaede Shiohara, Tetsuya Sasaki
2019 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), November 2019
DOI: 10.1109/APSIPAASC47483.2019.9023175
Abstract
We propose TLMS (Trainable Least Mean Squares) and TNLMS (Trainable Normalized LMS) algorithms, which use a different step size parameter at each iteration, determined by a machine learning approach. It is known that the LMS algorithm can achieve fast convergence and a small steady-state error simultaneously by dynamically controlling the step size, compared with a fixed step size; however, in conventional variable step size approaches, the step size parameter has been controlled in rather heuristic manners. In this study, based on the concept of differentiable programming, we unfold the iterative process of the LMS or NLMS algorithm and obtain a multilayer signal-flow graph similar to a neural network, where each layer has the step size of the corresponding LMS or NLMS iteration as an independent learnable parameter. We then optimize the step size parameters of all iterations using a machine learning approach, such as stochastic gradient descent. Numerical experiments demonstrate the performance of the proposed TLMS and TNLMS algorithms under various conditions.
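The idea of unfolding the LMS recursion into a layered graph with one learnable step size per iteration can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data model, filter length, number of unfolded iterations, and the finite-difference optimizer (standing in for stochastic gradient descent through the unfolded graph) are all assumptions made for the example.

```python
import numpy as np

def unfolded_lms(x, d, w0, mus):
    """Run LMS for len(mus) iterations; mus[k] is the step size of layer k."""
    w = w0.copy()
    for k, mu in enumerate(mus):
        xk = x[k]                       # input vector fed to layer k
        e = d[k] - xk @ w               # a priori error at layer k
        w = w + mu * e * xk             # LMS update with per-layer step size
    return w

def loss(mus, x, d, w_true, w0):
    """Training loss: squared misalignment of the final-layer filter estimate."""
    w = unfolded_lms(x, d, w0, mus)
    return float(np.sum((w - w_true) ** 2))

# Toy system-identification setup (all values illustrative).
rng = np.random.default_rng(0)
N, K = 4, 20                            # filter length, unfolded iterations
w_true = rng.standard_normal(N)         # unknown system to identify
x = rng.standard_normal((K, N))         # regressor vectors
d = x @ w_true + 0.01 * rng.standard_normal(K)  # noisy desired signal
w0 = np.zeros(N)

mus = np.full(K, 0.1)                   # all step sizes start equal
initial_loss = loss(mus, x, d, w_true, w0)

# Optimize the K step sizes by gradient descent; central finite differences
# stand in here for backpropagation through the unfolded graph.
lr, eps = 0.005, 1e-5
for _ in range(300):
    grad = np.empty(K)
    for k in range(K):
        mp, mm = mus.copy(), mus.copy()
        mp[k] += eps
        mm[k] -= eps
        grad[k] = (loss(mp, x, d, w_true, w0)
                   - loss(mm, x, d, w_true, w0)) / (2 * eps)
    mus -= lr * grad

trained_loss = loss(mus, x, d, w_true, w0)
print(initial_loss, trained_loss)
```

After training, the learned step sizes are typically non-uniform across layers, which mirrors the paper's point: an iteration-dependent step size schedule can outperform any single fixed step size.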