R. A. Teixeira, A. Braga, R. Takahashi, R. R. Saldanha
Proceedings. Vol.1. Sixth Brazilian Symposium on Neural Networks, 22 January 2000. DOI: 10.1109/SBRN.2000.889733
A multi-objective optimization approach for training artificial neural networks
This paper presents a learning scheme for training multilayer perceptrons (MLPs) with improved generalization ability. The scheme uses a training algorithm based on a multi-objective optimization mechanism, which balances the training squared error against the norm of the network weight vector; this balance corresponds to the trade-off between overfitting and underfitting. The method is applied to classification and regression problems and compared with weight decay, support vector machines, and standard backpropagation. The proposed method achieves the best training results among the methods compared, and additionally provides a systematic procedure for training neural networks that requires fewer heuristic parameter adjustments than the alternatives.