Pareto neuro-evolution: constructing ensemble of neural networks using multi-objective optimization

Hussein A. Abbass
{"title":"Pareto neuro-evolution: constructing ensemble of neural networks using multi-objective optimization","authors":"Hussein A. Abbass","doi":"10.1109/CEC.2003.1299928","DOIUrl":null,"url":null,"abstract":"In this paper, we present a comparison between two multiobjective formulations to the formation of neuro-ensembles. The first formulation splits the training set into two nonoverlapping stratified subsets and form an objective to minimize the training error on each subset, while the second formulation adds random noise to the training set to form a second objective. A variation of the memetic Pareto artificial neural network (MPANN) algorithm is used. MPANN is based on differential evolution for continuous optimization. The ensemble is formed from all networks on the Pareto frontier. It is found that the first formulation outperformed the second. The first formulation is also found to be competitive to other methods in the literature.","PeriodicalId":416243,"journal":{"name":"The 2003 Congress on Evolutionary Computation, 2003. CEC '03.","volume":"38 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2003-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2003 Congress on Evolutionary Computation, 2003. CEC '03.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CEC.2003.1299928","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

In this paper, we present a comparison between two multi-objective formulations for the formation of neuro-ensembles. The first formulation splits the training set into two non-overlapping stratified subsets and forms an objective to minimize the training error on each subset, while the second formulation adds random noise to the training set to form a second objective. A variation of the memetic Pareto artificial neural network (MPANN) algorithm is used; MPANN is based on differential evolution, an evolutionary method for continuous optimization. The ensemble is formed from all networks on the Pareto frontier. We find that the first formulation outperforms the second and is also competitive with other methods in the literature.
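To make the first formulation concrete, the sketch below shows how two stratified, non-overlapping training subsets yield a two-dimensional objective vector per network, how the Pareto frontier of a population is extracted, and how the frontier members vote as an ensemble. This is a minimal illustration under stated assumptions, not the paper's implementation: the evolutionary search itself (MPANN's differential-evolution step) is omitted, and a hypothetical `network.predict` interface returning 0/1 labels is assumed.

```python
import numpy as np

def stratified_split(X, y, seed=0):
    """Split (X, y) into two non-overlapping, class-stratified halves."""
    rng = np.random.default_rng(seed)
    idx_a, idx_b = [], []
    for label in np.unique(y):
        idx = rng.permutation(np.where(y == label)[0])
        half = len(idx) // 2
        idx_a.extend(idx[:half])
        idx_b.extend(idx[half:])
    return (X[idx_a], y[idx_a]), (X[idx_b], y[idx_b])

def two_subset_objectives(network, subsets):
    """Objective vector: training error on each of the two subsets (both minimized)."""
    (Xa, ya), (Xb, yb) = subsets
    return np.array([np.mean(network.predict(Xa) != ya),   # assumed predict() interface
                     np.mean(network.predict(Xb) != yb)])

def pareto_front(objectives):
    """Indices of non-dominated objective vectors (minimization)."""
    n = len(objectives)
    keep = []
    for i in range(n):
        dominated = any(
            np.all(objectives[j] <= objectives[i]) and
            np.any(objectives[j] < objectives[i])
            for j in range(n) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

def ensemble_predict(networks, X):
    """Majority vote over the Pareto-frontier networks (binary 0/1 labels)."""
    votes = np.stack([net.predict(X) for net in networks])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Usage follows the paper's recipe at a high level: evaluate each evolved network with `two_subset_objectives`, keep the networks indexed by `pareto_front`, and classify new data with `ensemble_predict` over that frontier.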