{"title":"反卷积的自适应步长动量法","authors":"Trung Vu, R. Raich","doi":"10.1109/SSP.2018.8450762","DOIUrl":null,"url":null,"abstract":"In this paper, we introduce an adaptive step size schedule that can significantly improve the convergence rate of momentum method for deconvolution applications. We provide analysis to show that the proposed method can asymptotically recover the optimal rate of convergence for first-order gradient methods applied to minimize smooth convex functions. In a convolution setting, we demonstrate that our adaptive schedule can be implemented efficiently without adding computational complexity to traditional gradient schemes.","PeriodicalId":330528,"journal":{"name":"2018 IEEE Statistical Signal Processing Workshop (SSP)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2018-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Adaptive Step Size Momentum Method For Deconvolution\",\"authors\":\"Trung Vu, R. Raich\",\"doi\":\"10.1109/SSP.2018.8450762\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper, we introduce an adaptive step size schedule that can significantly improve the convergence rate of momentum method for deconvolution applications. We provide analysis to show that the proposed method can asymptotically recover the optimal rate of convergence for first-order gradient methods applied to minimize smooth convex functions. In a convolution setting, we demonstrate that our adaptive schedule can be implemented efficiently without adding computational complexity to traditional gradient schemes.\",\"PeriodicalId\":330528,\"journal\":{\"name\":\"2018 IEEE Statistical Signal Processing Workshop (SSP)\",\"volume\":\"41 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2018-06-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2018 IEEE Statistical Signal Processing Workshop (SSP)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSP.2018.8450762\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2018 IEEE Statistical Signal Processing Workshop (SSP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSP.2018.8450762","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Adaptive Step Size Momentum Method For Deconvolution
In this paper, we introduce an adaptive step size schedule that can significantly improve the convergence rate of the momentum method in deconvolution applications. We show analytically that the proposed method asymptotically recovers the optimal convergence rate of first-order gradient methods for minimizing smooth convex functions. In the convolution setting, we demonstrate that the adaptive schedule can be implemented efficiently, adding no computational complexity over traditional gradient schemes.
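To make the setting concrete, below is a minimal sketch of momentum (heavy-ball) iterations for least-squares deconvolution with a per-iteration adaptive step size. The abstract does not specify the authors' schedule, so the step-size rule here is an illustrative stand-in: an exact line search along the gradient, which is cheap for this quadratic objective. The function name `deconv_momentum`, the circular-convolution model, and all parameter choices are assumptions for illustration, not the paper's method.

```python
import numpy as np

def deconv_momentum(y, h, beta=0.9, n_iter=200):
    """Illustrative sketch (not the paper's schedule): heavy-ball iterations
    for least-squares deconvolution min_x 0.5 * ||h (*) x - y||^2, where (*)
    denotes circular convolution. Working in the Fourier domain makes the
    convolution diagonal, so each iteration costs O(n) after two FFTs."""
    n = len(y)
    H = np.fft.fft(h, n)                  # convolution operator, diagonalized
    Y = np.fft.fft(y)
    X = np.zeros(n, dtype=complex)
    X_prev = X.copy()
    for _ in range(n_iter):
        G = np.conj(H) * (H * X - Y)      # gradient of the LS objective
        # Adaptive step size via exact line search for the quadratic:
        # alpha = ||g||^2 / (g^H A g), with A = diag(|H|^2) the Hessian.
        num = np.vdot(G, G).real
        den = np.vdot(G, (np.abs(H) ** 2) * G).real
        alpha = num / den if den > 0 else 0.0
        X_next = X - alpha * G + beta * (X - X_prev)
        X_prev, X = X, X_next
    return np.fft.ifft(X).real
```

As a quick sanity check, one could blur a sparse spike train with a known kernel `h` and verify that the residual ||h (*) x - y|| decreases across iterations; the FFT-based update reflects the abstract's point that, in a convolution setting, an adaptive schedule need not add complexity beyond traditional gradient schemes.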