On the Convergence of Multi-Objective Descent Algorithms

Martin Brown, Nicky Hutauruk

2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making
DOI: 10.1109/MCDM.2007.369447
Published: April 2007
Citations: 2

Abstract

This paper investigates the convergence paths, the rate of convergence, and the convergence half-space associated with a class of descent multi-objective optimization algorithms. The first-order descent algorithms are defined by maximizing the local objectives' reductions, which can be interpreted in either the primal space (parameters) or the dual space (objectives). It is shown that the convergence paths are often aligned with a subset of the objectives' gradients and that, in the limit, the convergence path is perpendicular to the local Pareto set. Similarities and differences are established for a range of p-norm descent algorithms. Bounds on the rate of convergence are established by considering the stability of first-order learning rules. In addition, it is shown that the multi-objective descent algorithms implicitly generate a half-space which defines a convergence condition for a family of optimization algorithms: any procedure that generates updates lying in this half-space will converge to the local Pareto set. This can be used to motivate the development of second-order algorithms.
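To make the construction in the abstract concrete, below is a minimal sketch (not the authors' code) of a first-order two-objective descent step of the kind the paper analyses, using the standard dual-space, minimum-norm formulation for the 2-norm case: the joint direction is the negative of the minimum-norm point in the convex hull of the gradients, which coincides (up to scaling) with the direction maximizing the worst-case local reduction over unit 2-norm steps. The closed-form combination weight, the toy quadratic objectives, and the step size are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def joint_descent_direction(g1, g2):
    """Return -argmin_{w in conv{g1, g2}} ||w||_2.

    Assumed dual-space construction for two objectives in the 2-norm;
    the result is zero exactly at Pareto-critical points.
    """
    diff = g1 - g2
    denom = float(diff @ diff)
    if denom == 0.0:                      # identical gradients
        return -g1
    # Closed-form minimiser of ||a*g1 + (1-a)*g2||^2 over a in [0, 1].
    alpha = float(np.clip(-(diff @ g2) / denom, 0.0, 1.0))
    return -(alpha * g1 + (1.0 - alpha) * g2)

# Toy bi-objective problem (an assumption for illustration):
# f1(x) = ||x - a||^2, f2(x) = ||x - b||^2, whose local Pareto set
# is the line segment between a and b.
a = np.array([0.0, 0.0])
b = np.array([2.0, 1.0])
grad_f1 = lambda x: 2.0 * (x - a)
grad_f2 = lambda x: 2.0 * (x - b)

x = np.array([3.0, -2.0])
eta = 0.05                                # first-order learning rate
for _ in range(500):
    d = joint_descent_direction(grad_f1(x), grad_f2(x))
    if np.linalg.norm(d) < 1e-8:          # ||d|| -> 0 at Pareto-critical points
        break
    x = x + eta * d

print(x)  # lands (approximately) on the segment between a and b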