Analysis of incremental communication for multilayer neural networks on a field programmable gate array

J. R. Dick, K. Kent
DOI: 10.1109/ISVLSI.2005.18
Published in: IEEE Computer Society Annual Symposium on VLSI: New Frontiers in VLSI Design (ISVLSI'05)
Publication date: 2005-05-11
Citations: 0

Abstract

A neural network is a massively parallel distributed processor made up of simple processing units known as neurons. These neurons are organized in layers, and every neuron in each layer is connected to each neuron in the adjacent layers. This connection architecture creates an enormous number of communication links between neurons, which is a major issue for a hardware implementation of a neural network, since communication links consume hardware space and hardware space costs money. To overcome this space problem, incremental communication for multilayer neural networks has been proposed. Incremental communication transmits only the change in value between neurons rather than the entire magnitude of the value. This allows the numbers to be represented with fewer bits and therefore communicated over narrower links. To validate the idea, an incremental communication neural network was designed and implemented, then compared against a traditional neural network. The implementation shows that even though the incremental communication neural network saves design space through reduced communication links, the additional resources needed to shape the data for transmission outweigh any design space savings when targeting a modern FPGA.
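The delta-only scheme the abstract describes can be sketched in software. The following is a minimal illustration, not the paper's implementation: the sender transmits only the change in a neuron's output, clamped to a narrow bit width, and the receiver reconstructs the value by accumulating deltas. The function name and the 4-bit link width are illustrative assumptions.

```python
def transmit(values, delta_bits=4):
    """Simulate sending a sequence of activations over a narrow
    delta-only link (hypothetical sketch of incremental communication).
    Returns the value the receiver holds after the last transmission."""
    hi = (1 << (delta_bits - 1)) - 1   # largest signed delta, e.g. +7
    lo = -(1 << (delta_bits - 1))      # smallest signed delta, e.g. -8
    sender_prev = receiver_acc = 0
    for v in values:
        delta = max(lo, min(hi, v - sender_prev))  # clamp to link width
        sender_prev += delta   # sender tracks what the receiver has seen
        receiver_acc += delta  # receiver reconstructs by accumulation
    return receiver_acc

print(transmit([3, 10, 12, 5]))  # 5: small steps are tracked exactly
```

Note the trade-off the paper's conclusion points at: the clamp, the sender-side shadow copy, and the receiver-side accumulator are exactly the extra "data shaping" resources that, in hardware, can outweigh the savings from narrower links.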