{"title":"An accelerated double-proximal gradient algorithm for DC programming","authors":"Gaoxi Li, Ying Yi, Yingquan Huang","doi":"10.1142/s0217595923500288","DOIUrl":null,"url":null,"abstract":"The double-proximal gradient algorithm (DPGA) is a new variant of the classical difference-of-convex algorithm (DCA) for solving difference-of-convex (DC) optimization problems. In this paper, we propose an accelerated version of the double-proximal gradient algorithm for DC programming, in which the objective function consists of three convex modules (only one module is smooth). We establish convergence of the sequence generated by our algorithm if the objective function satisfies the Kurdyka–[Formula: see text]ojasiewicz (K[Formula: see text]) property and show that its convergence rate is not weaker than DPGA. Compared with DPGA, the numerical experiments on an image processing model show that the number of iterations of ADPGA is reduced by 43.57% and the running time is reduced by 43.47% on average.","PeriodicalId":55455,"journal":{"name":"Asia-Pacific Journal of Operational Research","volume":"14 4","pages":"0"},"PeriodicalIF":1.1000,"publicationDate":"2023-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Asia-Pacific Journal of Operational Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1142/s0217595923500288","RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"OPERATIONS RESEARCH & MANAGEMENT SCIENCE","Score":null,"Total":0}
Citations: 0
Abstract
The double-proximal gradient algorithm (DPGA) is a new variant of the classical difference-of-convex algorithm (DCA) for solving difference-of-convex (DC) optimization problems. In this paper, we propose an accelerated double-proximal gradient algorithm (ADPGA) for DC programming, in which the objective function consists of three convex modules, only one of which is smooth. We establish convergence of the sequence generated by our algorithm when the objective function satisfies the Kurdyka–Łojasiewicz (KŁ) property, and we show that its convergence rate is not weaker than that of DPGA. Numerical experiments on an image processing model show that, compared with DPGA, ADPGA reduces the number of iterations by 43.57% and the running time by 43.47% on average.
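The abstract names the ingredients of the method (one smooth convex module, two nonsmooth convex modules, and an acceleration step) but not its exact updates. As a rough orientation only, the Python sketch below shows a generic accelerated proximal scheme for a DC model min_x f(x) + g(x) - h(x): the concave part -h is linearized through a subgradient, g is handled by its proximal operator, and a FISTA-style extrapolation point supplies the acceleration. This is an illustrative assumption, not the paper's ADPGA; the function names (accelerated_dc_proximal_gradient, prox_g, subgrad_h) and the l1-minus-l2 test problem are hypothetical.

```python
import numpy as np

def accelerated_dc_proximal_gradient(x0, grad_f, prox_g, subgrad_h, L,
                                     max_iter=500, tol=1e-8):
    """Illustrative accelerated proximal-gradient scheme for the DC model
        min_x  f(x) + g(x) - h(x),
    with f convex and L-smooth, g and h convex (possibly nonsmooth).

    NOTE: this is a generic sketch (proximal-DCA step plus Nesterov-style
    extrapolation), not the exact ADPGA update of the paper.
    """
    x_prev = x0.copy()
    x = x0.copy()
    t_prev, t = 1.0, 1.0
    for _ in range(max_iter):
        # Nesterov-style extrapolation point
        y = x + ((t_prev - 1.0) / t) * (x - x_prev)
        # Linearize the concave part -h via a subgradient at y
        v = subgrad_h(y)
        # Forward step on the smooth part, backward (prox) step on g
        x_next = prox_g(y - (1.0 / L) * (grad_f(y) - v), 1.0 / L)
        if np.linalg.norm(x_next - x) <= tol * max(1.0, np.linalg.norm(x)):
            x = x_next
            break
        t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        x_prev, x = x, x_next
    return x

# Hypothetical usage: l1-minus-l2 regularized least squares, a common DC test
# problem  min 0.5*||Ax - b||^2 + lam*||x||_1 - lam*||x||_2.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
soft = lambda z, t: np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
x_hat = accelerated_dc_proximal_gradient(
    np.zeros(100),
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda z, t: soft(z, lam * t),
    subgrad_h=lambda x: lam * (x / np.linalg.norm(x)
                               if np.linalg.norm(x) > 0 else np.zeros_like(x)),
    L=L,
)
```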
Journal Introduction:
The Asia-Pacific Journal of Operational Research (APJOR) provides a forum for practitioners, academics and researchers in Operational Research and related fields, within and beyond the Asia-Pacific region.
APJOR places submissions in one of the following categories: General, Theoretical, OR Practice, Reviewer Survey, OR Education, and Communications (including short articles and letters). Theoretical papers should carry significant methodological developments. The emphasis is on originality, quality and importance, with particular weight given to the practical significance of the results. Practical papers, illustrating the application of Operational Research, are of special interest.