{"title":"自定义神经网络架构的反向传播学习算法的并行公式:结果摘要","authors":"Minesh B. Anim, S. Shekhar","doi":"10.1109/TAI.1994.346497","DOIUrl":null,"url":null,"abstract":"Several generic parallel formulations of the backpropagation learning algorithm have been proposed recently. Further speedups are possible by customizing parallel formulations to the architecture of the neural network. The paper addresses the issue of customizing parallel formulations of the backpropagation learning algorithm to a given neural network architecture on multiprocessors with hypercube-like communication topology. We introduce a new parallel formulation called rectangular checkerboarding which adapts to the network architecture and can provide performance gains for non-uniform neural networks, where the number of nodes vary across the layers. Algebraic analysis shows that each instance of rectangular checkerboarding (using a specific rectangular processor grid) is optimal for an important family of network architectures. Experiments on CM-5 show that customizing to network architecture can provide significant (/spl sim/50%) performance gains for many interesting non-uniform neural network architectures, which are currently used in important applications. We also introduce the staircase framework, which can use different processor grids for different layers of a neural network.<<ETX>>","PeriodicalId":262014,"journal":{"name":"Proceedings Sixth International Conference on Tools with Artificial Intelligence. TAI 94","volume":"188 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1994-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":"{\"title\":\"Customizing parallel formulations of backpropagation learning algorithm to neural network architectures: a summary of result\",\"authors\":\"Minesh B. Anim, S. Shekhar\",\"doi\":\"10.1109/TAI.1994.346497\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Several generic parallel formulations of the backpropagation learning algorithm have been proposed recently. Further speedups are possible by customizing parallel formulations to the architecture of the neural network. The paper addresses the issue of customizing parallel formulations of the backpropagation learning algorithm to a given neural network architecture on multiprocessors with hypercube-like communication topology. We introduce a new parallel formulation called rectangular checkerboarding which adapts to the network architecture and can provide performance gains for non-uniform neural networks, where the number of nodes vary across the layers. Algebraic analysis shows that each instance of rectangular checkerboarding (using a specific rectangular processor grid) is optimal for an important family of network architectures. Experiments on CM-5 show that customizing to network architecture can provide significant (/spl sim/50%) performance gains for many interesting non-uniform neural network architectures, which are currently used in important applications. We also introduce the staircase framework, which can use different processor grids for different layers of a neural network.<<ETX>>\",\"PeriodicalId\":262014,\"journal\":{\"name\":\"Proceedings Sixth International Conference on Tools with Artificial Intelligence. 
TAI 94\",\"volume\":\"188 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1994-11-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"7\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings Sixth International Conference on Tools with Artificial Intelligence. TAI 94\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/TAI.1994.346497\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings Sixth International Conference on Tools with Artificial Intelligence. TAI 94","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/TAI.1994.346497","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Customizing parallel formulations of backpropagation learning algorithm to neural network architectures: a summary of results
Several generic parallel formulations of the backpropagation learning algorithm have been proposed recently. Further speedups are possible by customizing the parallel formulation to the architecture of the neural network. The paper addresses the issue of customizing parallel formulations of the backpropagation learning algorithm to a given neural network architecture on multiprocessors with hypercube-like communication topology. We introduce a new parallel formulation called rectangular checkerboarding, which adapts to the network architecture and can provide performance gains for non-uniform neural networks, where the number of nodes varies across the layers. Algebraic analysis shows that each instance of rectangular checkerboarding (using a specific rectangular processor grid) is optimal for an important family of network architectures. Experiments on CM-5 show that customizing to the network architecture can provide significant (~50%) performance gains for many interesting non-uniform neural network architectures, which are currently used in important applications. We also introduce the staircase framework, which can use different processor grids for different layers of a neural network.
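To make the checkerboarding idea concrete, the sketch below simulates a rectangular (block) partitioning of one layer's forward pass with NumPy on a single machine. The grid shape P_ROWS x P_COLS, the layer sizes, and all variable names are assumptions chosen for illustration only; the paper's actual formulation runs on a CM-5 with message passing over a hypercube-like topology, and none of that communication machinery is reproduced here. The point is simply that each "processor" holds one rectangular block of the weight matrix and the matching slice of the activations, and a row-wise summation recovers the unpartitioned result.

```python
# Illustrative sketch (assumed names and sizes): checkerboard-style block
# partitioning of a single layer's forward pass, simulated with NumPy.
import numpy as np

P_ROWS, P_COLS = 2, 4          # assumed rectangular processor grid
N_OUT, N_IN = 6, 8             # assumed layer sizes (divisible by grid dims)

rng = np.random.default_rng(0)
W = rng.standard_normal((N_OUT, N_IN))   # layer weight matrix
x = rng.standard_normal(N_IN)            # input activations

# Split W into a P_ROWS x P_COLS grid of blocks and x into P_COLS slices;
# in a real run, block (i, j) and slice j would live on processor (i, j).
W_blocks = [np.hsplit(row_block, P_COLS) for row_block in np.vsplit(W, P_ROWS)]
x_slices = np.split(x, P_COLS)

# Each "processor" (i, j) computes a partial product with its local block;
# summing over j stands in for the row-wise reduction across the grid.
y = np.concatenate([
    sum(W_blocks[i][j] @ x_slices[j] for j in range(P_COLS))
    for i in range(P_ROWS)
])

assert np.allclose(y, W @ x)   # matches the unpartitioned forward pass
```

Under this partitioning, the choice of grid aspect ratio (P_ROWS versus P_COLS) trades row-wise against column-wise communication, which is why adapting the grid to each layer's dimensions, as rectangular checkerboarding and the staircase framework do, can pay off for non-uniform networks.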