Decisor implementation in neural model selection by multiobjective optimization
R. A. Teixeira, A. P. Braga, R. Takahashi, R. R. Saldanha
VII Brazilian Symposium on Neural Networks, 2002 (SBRN 2002), published 2002-11-11. DOI: 10.1109/SBRN.2002.1181480
Citations: 4
Abstract
This work presents a new learning scheme for improving the generalization of multilayer perceptrons (MLPs). The proposed multiobjective algorithm minimizes both the sum of squared errors and the norm of the network weight vector to obtain Pareto-optimal solutions. Since the Pareto-optimal solutions are not unique, a decision phase ("decisor") is needed to choose the best one as the final solution, using a validation set. The final solution is expected to balance network variance and bias and, as a result, yields a model with high generalization capacity, avoiding both overfitting and underfitting.
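The two-step procedure described above (generate a set of trade-off solutions between training error and weight norm, then let a validation set act as the decisor) can be approximated with standard tooling. The sketch below is not the authors' exact multiobjective algorithm: it sweeps an L2 penalty to trace an approximate Pareto front of MLPs and then picks the candidate with the lowest validation error. The dataset, network size, and penalty grid are illustrative assumptions.

```python
# Approximate sketch (not the paper's exact method): trace an error / weight-norm
# trade-off by sweeping an L2 penalty, then use a validation set as the "decisor".
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(300, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=300)   # illustrative noisy regression data

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

candidates = []
for alpha in np.logspace(-5, 1, 13):                    # penalty grid is an assumption
    mlp = MLPRegressor(hidden_layer_sizes=(10,), alpha=alpha,
                       max_iter=5000, random_state=0)
    mlp.fit(X_tr, y_tr)
    train_sse = np.sum((mlp.predict(X_tr) - y_tr) ** 2)                  # objective 1: squared error
    weight_norm = np.sqrt(sum(np.sum(W ** 2) for W in mlp.coefs_))       # objective 2: ||w||
    val_mse = np.mean((mlp.predict(X_val) - y_val) ** 2)                 # decisor criterion
    candidates.append((val_mse, train_sse, weight_norm, alpha))

best = min(candidates, key=lambda c: c[0])              # "decisor": pick lowest validation error
print(f"chosen alpha={best[3]:.1e}, val MSE={best[0]:.4f}, ||w||={best[2]:.2f}")
```

Sweeping the penalty is only a scalarized stand-in for the paper's multiobjective search, but the selection step mirrors the decisor idea: among candidates spanning the error/weight-norm trade-off, the validation set identifies the one expected to balance bias and variance best.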