Customizing parallel formulations of backpropagation learning algorithm to neural network architectures: a summary of results

Minesh B. Anim, S. Shekhar
DOI: 10.1109/TAI.1994.346497
Published in: Proceedings Sixth International Conference on Tools with Artificial Intelligence (TAI '94), November 6, 1994
Citations: 7

Abstract

Several generic parallel formulations of the backpropagation learning algorithm have been proposed recently. Further speedups are possible by customizing parallel formulations to the architecture of the neural network. The paper addresses the issue of customizing parallel formulations of the backpropagation learning algorithm to a given neural network architecture on multiprocessors with a hypercube-like communication topology. We introduce a new parallel formulation called rectangular checkerboarding, which adapts to the network architecture and can provide performance gains for non-uniform neural networks, where the number of nodes varies across the layers. Algebraic analysis shows that each instance of rectangular checkerboarding (using a specific rectangular processor grid) is optimal for an important family of network architectures. Experiments on the CM-5 show that customizing to the network architecture can provide significant (approximately 50%) performance gains for many interesting non-uniform neural network architectures, which are currently used in important applications. We also introduce the staircase framework, which can use different processor grids for different layers of a neural network.
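The core idea behind rectangular checkerboarding can be illustrated with a small sketch. The following is a hypothetical illustration, not the paper's code: it partitions one layer's weight matrix onto a pr x pc rectangular processor grid and uses a simple per-processor communication estimate (input-activation slice plus partial-sum slice) to pick the grid shape. The function names, the cost model, and its constants are assumptions for illustration; the paper's algebraic analysis is more detailed.

```python
# Hypothetical sketch of rectangular-grid customization (not the paper's code).
# A layer's weight matrix W is n_in x n_out; a pr x pc processor grid gives
# each processor a block of roughly (n_in/pr) x (n_out/pc) weights.

def block_shape(n_in, n_out, pr, pc):
    """Per-processor weight-block dimensions (ceiling division for uneven splits)."""
    return (-(-n_in // pr), -(-n_out // pc))

def comm_volume(n_in, n_out, pr, pc):
    """Crude per-processor communication estimate per iteration: each
    processor exchanges its slice of input activations (~n_in/pr values)
    and of output partial sums (~n_out/pc values)."""
    return -(-n_in // pr) + -(-n_out // pc)

def best_grid(n_in, n_out, p):
    """Choose the rectangular grid pr x pc with pr*pc = p that minimizes
    the communication estimate -- the 'customization to the architecture'
    that the abstract refers to."""
    grids = [(pr, p // pr) for pr in range(1, p + 1) if p % pr == 0]
    return min(grids, key=lambda g: comm_volume(n_in, n_out, g[0], g[1]))
```

Under this toy model, a uniform 1024x1024 layer on 16 processors favors the square 4x4 grid, while a highly non-uniform 4096x64 layer favors the skewed 16x1 grid, matching the abstract's claim that non-square grids pay off for non-uniform networks; the staircase framework then lets each layer use its own such grid.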