{"title":"复杂工业应用的神经网络训练","authors":"H. Vanlandingham, F. Azam, W. Pulliam","doi":"10.1109/SMCIA.2001.936720","DOIUrl":null,"url":null,"abstract":"The paper presents two methods of training multilayer perceptrons (MLPs) that use both functional values and co-located derivative values during the training process. The first method extends the standard backpropagation training algorithm for MLPs whereas the second method employs genetic algorithms (GAs) to find the optimal neural network weights using both functional and co-located function derivative values. The GAs used for optimization of the weights of a feedforward artificial neural network use a special reordering of the genotype before recombination. The ultimate goal of this research effort is to be able to train and design an artificial neural networks (ANN) more effectively, i.e., to have a network that generalizes better, learns faster and requires fewer training data points. The initial results indicate that the methods do, in fact, provide good generalization while requiring only a relatively sparse sampling of the function and its derivative values during the training phase, as indicated by the illustrative examples.","PeriodicalId":104202,"journal":{"name":"SMCia/01. Proceedings of the 2001 IEEE Mountain Workshop on Soft Computing in Industrial Applications (Cat. No.01EX504)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Neural network training for complex industrial applications\",\"authors\":\"H. Vanlandingham, F. Azam, W. Pulliam\",\"doi\":\"10.1109/SMCIA.2001.936720\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The paper presents two methods of training multilayer perceptrons (MLPs) that use both functional values and co-located derivative values during the training process. The first method extends the standard backpropagation training algorithm for MLPs whereas the second method employs genetic algorithms (GAs) to find the optimal neural network weights using both functional and co-located function derivative values. The GAs used for optimization of the weights of a feedforward artificial neural network use a special reordering of the genotype before recombination. The ultimate goal of this research effort is to be able to train and design an artificial neural networks (ANN) more effectively, i.e., to have a network that generalizes better, learns faster and requires fewer training data points. The initial results indicate that the methods do, in fact, provide good generalization while requiring only a relatively sparse sampling of the function and its derivative values during the training phase, as indicated by the illustrative examples.\",\"PeriodicalId\":104202,\"journal\":{\"name\":\"SMCia/01. Proceedings of the 2001 IEEE Mountain Workshop on Soft Computing in Industrial Applications (Cat. No.01EX504)\",\"volume\":\"1 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-06-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SMCia/01. Proceedings of the 2001 IEEE Mountain Workshop on Soft Computing in Industrial Applications (Cat. 
No.01EX504)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SMCIA.2001.936720\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SMCia/01. Proceedings of the 2001 IEEE Mountain Workshop on Soft Computing in Industrial Applications (Cat. No.01EX504)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SMCIA.2001.936720","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The paper presents two methods for training multilayer perceptrons (MLPs) that use both function values and co-located derivative values during training. The first method extends the standard backpropagation algorithm for MLPs, whereas the second employs genetic algorithms (GAs) to search for the optimal network weights using the same combined information. The GAs used to optimize the weights of the feedforward network apply a special reordering of the genotype before recombination. The ultimate goal of this research is to train and design artificial neural networks (ANNs) more effectively, i.e., to obtain networks that generalize better, learn faster, and require fewer training data points. Initial results on illustrative examples indicate that both methods provide good generalization while requiring only a relatively sparse sampling of the function and its derivative values during the training phase.
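The abstract does not give the modified backpropagation equations, but the core idea of the first method, fitting function values and co-located derivative values at the same time, can be sketched as a combined loss. The following is a minimal illustrative sketch, not the authors' algorithm: it assumes a 1-D target, a single tanh hidden layer, a squared-error term on each information source, and a weighting hyperparameter `lam` of my own choosing, with automatic differentiation and plain gradient descent standing in for the hand-derived extended backpropagation rule.

```python
# Minimal sketch (not the paper's algorithm) of derivative-informed MLP
# training: fit function values and co-located derivative values at once.
# Assumptions of mine, not from the paper: 1-D target, one tanh hidden
# layer, squared-error terms, derivative weight `lam`, and plain
# gradient descent in place of the extended backpropagation rule.
import jax
import jax.numpy as jnp

def mlp(params, x):
    """Scalar-in, scalar-out MLP with one tanh hidden layer."""
    w1, b1, w2, b2 = params
    h = jnp.tanh(w1 * x + b1)      # hidden activations, shape (H,)
    return jnp.dot(w2, h) + b2     # scalar output

# d(network output)/dx at a single point, via automatic differentiation.
dmlp_dx = jax.grad(mlp, argnums=1)

def loss(params, xs, fs, dfs, lam=0.1):
    """Squared error on function values plus lam times the squared error
    on the co-located derivative values (the two information sources)."""
    pred_f = jax.vmap(lambda x: mlp(params, x))(xs)
    pred_df = jax.vmap(lambda x: dmlp_dx(params, x))(xs)
    return jnp.mean((pred_f - fs) ** 2) + lam * jnp.mean((pred_df - dfs) ** 2)

@jax.jit
def step(params, xs, fs, dfs, lr=0.05):
    grads = jax.grad(loss)(params, xs, fs, dfs)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

if __name__ == "__main__":
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    H = 16
    params = (0.5 * jax.random.normal(k1, (H,)), jnp.zeros(H),
              0.5 * jax.random.normal(k2, (H,)), jnp.zeros(()))
    # A deliberately sparse sample of f(x) = sin(x) and f'(x) = cos(x).
    xs = jnp.linspace(-3.0, 3.0, 7)
    fs, dfs = jnp.sin(xs), jnp.cos(xs)
    for _ in range(2000):
        params = step(params, xs, fs, dfs)
    print("final combined loss:", loss(params, xs, fs, dfs))
```

With only seven sample points, the derivative term constrains the slope of the fit between samples, which is the mechanism behind the sparse-sampling claim in the abstract.

The second method's "special reordering of the genotype before recombination" is likewise not specified in the abstract. A common reason to reorder before crossover is the hidden-unit permutation problem: two parents may encode similar networks with their hidden units in different orders, so naive crossover destroys good substructure. The sketch below uses a stand-in reordering, sorting each parent's hidden units by their input weight, inside an otherwise ordinary GA; none of these operator choices should be attributed to the paper.

```python
# Hedged sketch of a GA weight search with genotype reordering before
# recombination. The paper's actual reordering is not described in the
# abstract; sorting hidden units by input weight is my stand-in for
# aligning parents before crossover, not the authors' operator.
import numpy as np

rng = np.random.default_rng(0)
H = 8                       # hidden units
GENES = 3 * H + 1           # genotype = [w1 (H) | b1 (H) | w2 (H) | b2]

def decode(g):
    return g[:H], g[H:2*H], g[2*H:3*H], g[3*H]

def net_and_deriv(g, xs):
    """1-D tanh MLP output and its input derivative (analytic chain rule)."""
    w1, b1, w2, b2 = decode(g)
    h = np.tanh(np.outer(xs, w1) + b1)   # (N, H)
    y = h @ w2 + b2                      # f(x) estimate, (N,)
    dy = ((1.0 - h ** 2) * w1) @ w2      # df/dx estimate, (N,)
    return y, dy

def fitness(g, xs, fs, dfs, lam=0.1):
    """Negative combined error on function and derivative samples."""
    y, dy = net_and_deriv(g, xs)
    return -(np.mean((y - fs) ** 2) + lam * np.mean((dy - dfs) ** 2))

def reorder(g):
    """Sort hidden units by w1 so the parents' genes line up positionally."""
    w1, b1, w2, b2 = decode(g)
    idx = np.argsort(w1)
    return np.concatenate([w1[idx], b1[idx], w2[idx], [b2]])

def crossover(p1, p2):
    """Single-point crossover on the (reordered) genotypes."""
    cut = rng.integers(1, GENES)
    return np.concatenate([p1[:cut], p2[cut:]])

# Sparse samples of f(x) = sin(x) and f'(x) = cos(x), as in the first sketch.
xs = np.linspace(-3.0, 3.0, 7)
fs, dfs = np.sin(xs), np.cos(xs)

pop = rng.normal(scale=0.5, size=(40, GENES))
for _ in range(300):
    scores = np.array([fitness(g, xs, fs, dfs) for g in pop])
    parents = pop[np.argsort(scores)[-20:]]              # truncation selection
    children = np.array([crossover(reorder(parents[rng.integers(20)]),
                                   reorder(parents[rng.integers(20)]))
                         for _ in range(40)])
    children += rng.normal(scale=0.02, size=children.shape)  # mutation
    children[0] = parents[-1]            # elitism: keep the best unmutated
    pop = children

scores = np.array([fitness(g, xs, fs, dfs) for g in pop])
print("best fitness:", scores.max())
```

Because both sketches score candidates on function and derivative errors together, they illustrate how the two training signals enter each method; the specific loss weighting, selection scheme, and mutation scale here are illustrative defaults, not values from the paper.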