Supervised learning of DPLL based winner-take-all neural network

Masaki Azuma, H. Hikawa
{"title":"基于DPLL的赢家通吃神经网络的监督学习","authors":"Masaki Azuma, H. Hikawa","doi":"10.1109/ICES.2014.7008730","DOIUrl":null,"url":null,"abstract":"Neural networks are widely used in various fields due to their superior learning abilities. This paper proposes a hardware winner-take-all neural network (WTANN) that employs a new winner-take-all (WTA) circuit with phase-modulated pulse signals and digital phase-locked loops (DPLLs). The system uses DPLL as a computing element, so all input values are expressed by phases of rectangular signals. The proposed WTA circuit employs a simple winner search circuit. The proposed WTANN architecture is described by very high speed integrated circuit (VHSIC) Hardware Description Language (VHDL) and its feasibility was tested and verified through simulations. Conventional WTA takes a centralized winner search approach, in which vector distances are collected from all neurons and compared. In contrast, the winner search in the proposed system is carried out locally by a distributed winner search circuit among neurons. Therefore, no global communication channels with a wide bandwidth between the winner search module and each neuron are required. Furthermore, the proposed WTANN can easily extend the system scale, merely by increasing the number of neurons. Vector classifications with WTANN using two kinds of data sets, Iris and Wine, were carried out in VHDL simulations. The circuit size and speed were then evaluated by applying the VHDL description to a logic synthesis tool and experiments using FPGA. The results revealed that the proposed WTANN achieved valid learning.","PeriodicalId":432958,"journal":{"name":"2014 IEEE International Conference on Evolvable Systems","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"9","resultStr":"{\"title\":\"Supervised learning of DPLL based winner-take-all neural network\",\"authors\":\"Masaki Azuma, H. Hikawa\",\"doi\":\"10.1109/ICES.2014.7008730\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Neural networks are widely used in various fields due to their superior learning abilities. This paper proposes a hardware winner-take-all neural network (WTANN) that employs a new winner-take-all (WTA) circuit with phase-modulated pulse signals and digital phase-locked loops (DPLLs). The system uses DPLL as a computing element, so all input values are expressed by phases of rectangular signals. The proposed WTA circuit employs a simple winner search circuit. The proposed WTANN architecture is described by very high speed integrated circuit (VHSIC) Hardware Description Language (VHDL) and its feasibility was tested and verified through simulations. Conventional WTA takes a centralized winner search approach, in which vector distances are collected from all neurons and compared. In contrast, the winner search in the proposed system is carried out locally by a distributed winner search circuit among neurons. Therefore, no global communication channels with a wide bandwidth between the winner search module and each neuron are required. Furthermore, the proposed WTANN can easily extend the system scale, merely by increasing the number of neurons. Vector classifications with WTANN using two kinds of data sets, Iris and Wine, were carried out in VHDL simulations. The circuit size and speed were then evaluated by applying the VHDL description to a logic synthesis tool and experiments using FPGA. 
The results revealed that the proposed WTANN achieved valid learning.\",\"PeriodicalId\":432958,\"journal\":{\"name\":\"2014 IEEE International Conference on Evolvable Systems\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2014-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"9\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2014 IEEE International Conference on Evolvable Systems\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICES.2014.7008730\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 IEEE International Conference on Evolvable Systems","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICES.2014.7008730","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 9

Abstract

Neural networks are widely used in various fields due to their superior learning abilities. This paper proposes a hardware winner-take-all neural network (WTANN) that employs a new winner-take-all (WTA) circuit based on phase-modulated pulse signals and digital phase-locked loops (DPLLs). The system uses DPLLs as computing elements, so all input values are expressed as phases of rectangular signals. The proposed WTA circuit employs a simple winner search circuit. The WTANN architecture is described in very high speed integrated circuit (VHSIC) Hardware Description Language (VHDL), and its feasibility was verified through simulation. Conventional WTA takes a centralized winner search approach, in which vector distances are collected from all neurons and compared; in contrast, the winner search in the proposed system is carried out locally by a distributed winner search circuit among the neurons. Therefore, no wide-bandwidth global communication channels between a winner search module and each neuron are required. Furthermore, the proposed WTANN can be extended in scale merely by increasing the number of neurons. Vector classification with the WTANN was carried out in VHDL simulations using two data sets, Iris and Wine. Circuit size and speed were then evaluated by applying the VHDL description to a logic synthesis tool and by experiments on an FPGA. The results revealed that the proposed WTANN achieved valid learning.
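The abstract describes the architecture only at a functional level, so the following Python sketch illustrates the behaviour being described: a winner-take-all layer that finds the neuron closest to an input vector and adjusts it with a supervised update. The `WTALayer` class, the LVQ1-style learning rule, the neuron-to-class assignment, and the synthetic data are assumptions made for illustration; the actual system is written in VHDL, encodes values as phases of DPLL square-wave signals, and performs the winner search with a distributed circuit rather than the centralized argmin used here.

```python
# Behavioral sketch (not the authors' VHDL): a winner-take-all layer with a
# supervised, LVQ1-style update. The learning rule and the class assignment
# below are assumptions for illustration; the paper's hardware encodes values
# as phases of rectangular signals and searches for the winner with a
# distributed circuit, which this software model only approximates.

import numpy as np

class WTALayer:
    def __init__(self, n_neurons, n_inputs, n_classes, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        # Weight vectors play the role of the neurons' reference vectors.
        self.weights = rng.uniform(0.0, 1.0, size=(n_neurons, n_inputs))
        # Hypothetical round-robin assignment of class labels to neurons.
        self.labels = np.arange(n_neurons) % n_classes
        self.lr = lr

    def winner(self, x):
        # Centralized search shown here for clarity; in the proposed hardware
        # this comparison is performed locally among the neurons.
        distances = np.sum((self.weights - x) ** 2, axis=1)
        return int(np.argmin(distances))

    def train_step(self, x, target):
        w = self.winner(x)
        # LVQ1-style supervised update (assumed, not stated in the abstract):
        # pull the winner toward x if its label matches, push it away otherwise.
        sign = 1.0 if self.labels[w] == target else -1.0
        self.weights[w] += sign * self.lr * (x - self.weights[w])
        return w

    def classify(self, x):
        return self.labels[self.winner(x)]

# Usage on a synthetic two-class task: inputs scaled to [0, 1], which would map
# to signal phases in the hardware implementation.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.uniform(0.0, 1.0, size=(150, 4))
    y = (X[:, 0] + X[:, 2] > 1.0).astype(int)
    net = WTALayer(n_neurons=6, n_inputs=4, n_classes=2)
    for epoch in range(20):
        for xi, yi in zip(X, y):
            net.train_step(xi, yi)
    acc = np.mean([net.classify(xi) == yi for xi, yi in zip(X, y)])
    print(f"training accuracy: {acc:.2f}")
```

The practical point of the distributed winner search is scalability: because no wide shared bus carries every neuron's distance to a central comparator, the system can be enlarged simply by instantiating more neurons, as the abstract notes.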