{"title":"用逐步解麻烦的方法克服多层感知器训练中的局部最小问题","authors":"J. Lo, Yichuan Gui, Yun Peng","doi":"10.1109/IJCNN.2013.6706796","DOIUrl":null,"url":null,"abstract":"A method of training neural networks using the risk-averting error (RAE) criterion Jλ (w), which was presented in IJCNN 2001, has the capability to avoid nonglobal local minima, but suffers from a severe limitation on the magnitude of the risk-sensitivity index λ. To eliminating the limitation, an improved method using the normalized RAE (NRAE) Cλ (w) was proposed in ISNN 2012, but it requires a selection of a proper λ, whose range may be dependent on the application. A new training method called the gradual deconvexification (GDC) is proposed in this paper. It starts with a very large λ and gradually decreases it in the training process until a global minimum of Cλ (w) or a good generalization capability is achieved. GDC training method was tested on a large number of numerical examples and produced a very good result in each test.","PeriodicalId":376975,"journal":{"name":"The 2013 International Joint Conference on Neural Networks (IJCNN)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Overcoming the local-minimum problem in training multilayer perceptrons by gradual deconvexification\",\"authors\":\"J. Lo, Yichuan Gui, Yun Peng\",\"doi\":\"10.1109/IJCNN.2013.6706796\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A method of training neural networks using the risk-averting error (RAE) criterion Jλ (w), which was presented in IJCNN 2001, has the capability to avoid nonglobal local minima, but suffers from a severe limitation on the magnitude of the risk-sensitivity index λ. 
To eliminating the limitation, an improved method using the normalized RAE (NRAE) Cλ (w) was proposed in ISNN 2012, but it requires a selection of a proper λ, whose range may be dependent on the application. A new training method called the gradual deconvexification (GDC) is proposed in this paper. It starts with a very large λ and gradually decreases it in the training process until a global minimum of Cλ (w) or a good generalization capability is achieved. GDC training method was tested on a large number of numerical examples and produced a very good result in each test.\",\"PeriodicalId\":376975,\"journal\":{\"name\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"The 2013 International Joint Conference on Neural Networks (IJCNN)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IJCNN.2013.6706796\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"The 2013 International Joint Conference on Neural Networks (IJCNN)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IJCNN.2013.6706796","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Overcoming the local-minimum problem in training multilayer perceptrons by gradual deconvexification
A method of training neural networks using the risk-averting error (RAE) criterion Jλ(w), which was presented at IJCNN 2001, has the capability to avoid nonglobal local minima, but suffers from a severe limitation on the magnitude of the risk-sensitivity index λ. To eliminate this limitation, an improved method using the normalized RAE (NRAE) Cλ(w) was proposed at ISNN 2012, but it requires selecting a proper λ, whose range may depend on the application. A new training method, called gradual deconvexification (GDC), is proposed in this paper. It starts with a very large λ and gradually decreases it during training until a global minimum of Cλ(w) is reached or a good generalization capability is achieved. The GDC training method was tested on a large number of numerical examples and produced very good results in every test.
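The training schedule described above can be sketched in code. The sketch below is illustrative only: the exact definitions of Jλ(w) and Cλ(w), the λ schedule (start value, decay factor, phase length), the network size, and the optimizer are all assumptions, not values taken from the paper. One commonly used form of the NRAE is Cλ(w) = (1/λ) log((1/K) Σ_k exp(λ e_k²)), which is what this sketch minimizes, evaluated with the log-sum-exp trick so that a very large λ does not overflow.

```python
import numpy as np

def nrae(errors, lam):
    # One assumed form of the normalized risk-averting error:
    #   C_lam = (1/lam) * log( mean_k exp(lam * e_k^2) )
    # computed via log-sum-exp so very large lam stays finite.
    z = lam * errors ** 2
    m = z.max()
    return (m + np.log(np.mean(np.exp(z - m)))) / lam

def gdc_train(x, y, n_hidden=8, lam0=1e4, decay=0.1, lam_min=1e-2,
              phases=6, steps=2000, lr=0.02, seed=0):
    """Gradual-deconvexification sketch for a one-hidden-layer perceptron:
    start with a very large lam (near-minimax, near-convex regime) and
    multiply it by `decay` after each phase.  Hyperparameters are
    illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(x.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, 1))
    b2 = np.zeros(1)

    lam = lam0
    for _ in range(phases):
        for _ in range(steps):
            h = np.tanh(x @ W1 + b1)          # hidden activations
            r = (h @ W2 + b2) - y             # residuals e_k
            # dC/dr_k = 2 r_k * softmax(lam * r_k^2)_k, derived from the
            # assumed C_lam above; large lam weights the worst errors.
            z = lam * r ** 2
            wgt = np.exp(z - z.max())
            wgt /= wgt.sum()
            g = 2.0 * wgt * r
            # standard backpropagation through the tanh layer
            dW2 = h.T @ g
            db2 = g.sum(0)
            da = (g @ W2.T) * (1.0 - h ** 2)
            dW1 = x.T @ da
            db1 = da.sum(0)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2
        lam = max(lam * decay, lam_min)       # gradual deconvexification
    return (W1, b1, W2, b2)

def predict(params, x):
    W1, b1, W2, b2 = params
    return np.tanh(x @ W1 + b1) @ W2 + b2
```

A typical use is fitting a small regression set, e.g. x = np.linspace(-2, 2, 16).reshape(-1, 1) with y = np.sin(x): the early large-λ phases drive the worst-case error down, and the later small-λ phases behave like ordinary mean-squared-error training.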