Proceedings of Fifth International Conference on Microelectronics for Neural Networks: latest publications

Hardware-friendly learning algorithms for neural networks: an overview
E. Fiesler, P. Moerland
DOI: https://doi.org/10.1109/MNNFS.1996.493781
Abstract: The hardware implementation of artificial neural networks and their learning algorithms is a fascinating area of research with far-reaching applications. However, the mapping from an ideal mathematical model to compact and reliable hardware is far from evident. This paper presents an overview of various methods that simplify the hardware implementation of neural network models. Adaptations specific to particular learning rules or network architectures are discussed, ranging from the use of perturbation in multilayer feedforward networks and local learning algorithms to quantization effects in self-organizing feature maps. In more general terms, the problems of inaccuracy, limited precision, and robustness are also treated.
Cited by: 29
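The overview mentions perturbation-based learning for multilayer feedforward networks. Below is a minimal sketch of the idea, assuming a toy single-neuron model and a simple finite-difference weight-perturbation rule (an illustration of the principle, not any specific rule surveyed in the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single neuron: y = sigmoid(w . x)
def forward(w, X):
    return 1.0 / (1.0 + np.exp(-X @ w))

def loss(w, X, t):
    return np.mean((forward(w, X) - t) ** 2)

# Weight perturbation: approximate each partial derivative by the loss
# change under a small perturbation of one weight. Only forward passes
# are needed, which is what makes such rules attractive for analog
# hardware, where exact backpropagated gradients are costly on-chip.
def perturbation_step(w, X, t, delta=1e-3, lr=0.5):
    base = loss(w, X, t)
    grad = np.zeros_like(w)
    for i in range(len(w)):
        wp = w.copy()
        wp[i] += delta
        grad[i] = (loss(wp, X, t) - base) / delta
    return w - lr * grad

# Learn logical AND (third input is a constant bias).
X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
t = np.array([0, 0, 0, 1], dtype=float)
w = rng.normal(scale=0.1, size=3)
for _ in range(2000):
    w = perturbation_step(w, X, t)
final_loss = loss(w, X, t)
print(round(final_loss, 3))
```

The appeal for hardware is that the same forward circuit used for inference also produces the gradient estimate; no separate backward datapath is required.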
A low-power high-precision tunable WINNER-TAKE-ALL network
R. Canegallo, M. Chinosi, A. Kramer
DOI: https://doi.org/10.1109/MNNFS.1996.493805
Abstract: This paper describes a low-power CMOS circuit for selecting the greatest of n analog voltages within a tunable selection range. An increasing-speed, decreasing-precision law is used to determine the amplitude of the selection range. A resolution of 16 mV down to 4 mV, over a 2 V to 4 V dynamic input range, can be obtained by reducing the speed from 2 MHz to 500 kHz. A quiescent current of 1 μA, an AC current of 2 μA for the selected cells, and a small size make this circuit suitable for VLSI implementations of massively parallel analog computational circuits.
Cited by: 0
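The tunable selection range can be mimicked with a simple behavioral model: every cell within one resolution step of the maximum is reported as a winner, so a coarser resolution may return ties that a finer one resolves. This is an illustration only, not the authors' circuit; the example voltages are hypothetical values inside the 2 V to 4 V range quoted above.

```python
# Behavioral sketch of a winner-take-all with a tunable resolution.
def winner_take_all(voltages, resolution_mv):
    vmax = max(voltages)
    # Every cell within one resolution step of the maximum "wins".
    return [i for i, v in enumerate(voltages)
            if (vmax - v) * 1000.0 < resolution_mv]

inputs = [2.10, 3.52, 3.51, 2.90]  # volts, within the 2-4 V input range
print(winner_take_all(inputs, 16.0))  # coarse 16 mV: cells 1 and 2 tie
print(winner_take_all(inputs, 4.0))   # fine 4 mV: only cell 1 wins
```

Running at 2 MHz corresponds to the coarse 16 mV case; slowing to 500 kHz buys the 4 mV resolution that separates the tie.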
A current mode CMOS multi-layer perceptron chip
G. M. Bo, D. Caviglia, M. Valle
DOI: https://doi.org/10.1109/MNNFS.1996.493778
Abstract: An analog VLSI neural network integrated circuit is presented. It consists of a feedforward multi-layer perceptron (MLP) network with 64 inputs, 64 hidden neurons, and 10 outputs. The computational cells have been designed using the current-mode approach and weak-inversion-biased MOS transistors to reduce the occupied area and power consumption. The processing delay is less than 2 μs and the total average power consumption is around 200 mW. This is equivalent to a computational power of about 2.5×10⁹ connections per second. The chip can be employed in a chip-in-the-loop neural architecture.
Cited by: 11
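The quoted throughput is consistent with back-of-the-envelope arithmetic: a 64-64-10 MLP evaluates 64×64 + 64×10 = 4736 weighted connections per pass, and one pass takes at most 2 μs:

```python
# Sanity check of the connections-per-second figure quoted above.
inputs, hidden, outputs = 64, 64, 10
connections = inputs * hidden + hidden * outputs  # 4736 weights per pass
delay_s = 2e-6                                    # < 2 microsecond delay
cps = connections / delay_s
print(connections, cps)  # 4736 connections, 2.368e9 CPS (~2.5e9 as quoted)
```

At roughly 200 mW this works out to about 12 billion connections per second per watt, which is the kind of efficiency that motivated analog current-mode designs.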
Analog VLSI circuits for visual motion-based adaptation of post-saccadic drift
T. Horiuchi, C. Koch
DOI: https://doi.org/10.1109/MNNFS.1996.493773
Abstract: Using the analog VLSI-based saccadic eye movement system developed previously, we investigate the use of biologically realistic error signals to calibrate the system in a manner similar to the primate oculomotor system. In this paper we introduce two new circuit components used to perform this task: a resettable-integrator model of the burst generator with a floating-gate structure to provide on-chip storage of analog parameters, and a directionally selective motion detector for detecting post-saccadic drift.
Cited by: 9
Computational image sensors for on-sensor-compression
T. Hamamoto, Y. Egi, M. Hatori, K. Aizawa, T. Okubo, H. Maruyama, E. Fossum
DOI: https://doi.org/10.1109/MNNFS.1996.493806
Abstract: In this paper, we propose novel image sensors that compress the image signal. By making use of very fast analog processing on the imager plane, the compression sensor can significantly reduce the amount of pixel data output from the sensor. The proposed sensor is intended to overcome the communication bottleneck in high-pixel-rate imaging, such as high-frame-rate and high-resolution imaging. The compression sensor consists of three parts: transducer, memory, and processor. Two architectures for on-sensor compression are discussed: a pixel-parallel architecture, in which the three parts are combined in each pixel and processing is pixel-parallel, and a column-parallel architecture, in which the transducer, processor, and memory areas are separated and processing is column-parallel. We also describe a prototype pixel-parallel sensor with 32×32 pixels, fabricated in a 2 μm CMOS technology, and present experimental results.
Cited by: 3
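As a rough software illustration of the data-reduction principle (an assumption for illustration, not the authors' specific compression scheme), a conditional-replenishment sketch transmits only the pixels that changed since the previous frame:

```python
import numpy as np

# Hedged sketch of one simple on-sensor compression idea: output a
# sparse list of (coordinate, value) updates instead of the full frame.
def compress_frame(prev, curr, threshold=8):
    changed = np.abs(curr.astype(int) - prev.astype(int)) > threshold
    coords = np.argwhere(changed)   # (row, col) of each changed pixel
    values = curr[changed]          # new values for those pixels only
    return coords, values

prev = np.zeros((32, 32), dtype=np.uint8)   # 32x32 like the prototype
curr = prev.copy()
curr[10:12, 10:12] = 200                    # small moving object
coords, values = compress_frame(prev, curr)
print(len(values))  # 4 changed pixels transmitted instead of 1024
```

The on-chip version performs the comparison in analog on the imager plane, which is what removes the readout bottleneck for high pixel rates.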
Implementation of time-multiplexed CNN building block cell
K. K. Lai, P. Leong
DOI: https://doi.org/10.1109/MNNFS.1996.493775
Abstract: We propose an area-efficient implementation of a cellular neural network (CNN) using a time-multiplexed method. This paper describes the underlying theory, the method, and the circuit architecture of a VLSI implementation. SPICE simulation results illustrate the circuit operation. A building-block cell of a time-multiplexed cellular neural network has been completed and is currently being fabricated.
Cited by: 12
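Time multiplexing saves area because one physical cell can evaluate the same dynamics sequentially for many grid positions. The sketch below steps the standard Chua-Yang CNN cell equation with an Euler integrator; the template values and inputs are hypothetical, and the paper's actual circuit details differ:

```python
# Standard CNN cell: dx/dt = -x + sum(A*y) + sum(B*u) + I, with the
# piecewise-linear output y = 0.5 * (|x + 1| - |x - 1|).
def cnn_output(x):
    return 0.5 * (abs(x + 1) - abs(x - 1))

def cell_step(x, feedback, control, bias, dt=0.1):
    # One Euler step; the neighbourhood sums A*y and B*u are assumed
    # to be already folded into the feedback and control arguments.
    return x + dt * (-x + feedback + control + bias)

x = 0.0
for _ in range(100):
    x = cell_step(x, feedback=cnn_output(x), control=0.5, bias=0.5)
print(cnn_output(x))  # state settles and the output saturates
```

A time-multiplexed implementation would run this loop for one block of cells, store the states, then reuse the same hardware for the next block.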
On-chip backpropagation training using parallel stochastic bit streams
Kuno Kollmann, K. Riemschneider, Hans Christoph
DOI: https://doi.org/10.1109/MNNFS.1996.493785
Abstract: It is proposed to use stochastic arithmetic for all arithmetic operations in the training and processing of backpropagation networks. In this way it is possible to design simple processing elements that fulfil all the requirements of information processing on values coded as independent stochastic bit streams. Combining such processing elements yields silicon-saving, fully parallel neural networks of variable structure and capacity, supporting a complete hardware implementation of the error backpropagation algorithm. A sign-considering coding method is proposed which allows a homogeneous implementation of the net without separating it into an inhibitory and an excitatory part. Furthermore, parameterizable nonlinearities based on stochastic automata are used. Comparable to the momentum (pulse) term, and improving the training of a net, there is a sequential arrangement of adaptive and integrative elements influencing the weights, also implemented stochastically. Experimental hardware implementations based on PLDs/FPGAs and a first silicon prototype have been realized.
Cited by: 20
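The core trick of stochastic arithmetic fits in a few lines: a value p in [0, 1] is coded as a random bit stream with P(bit = 1) = p, and a single AND gate then multiplies two independently coded values. The sketch shows only this unipolar case; the paper's sign-considering code is more elaborate:

```python
import random

random.seed(42)

# Encode a probability as a Bernoulli bit stream of length n.
def encode(p, n):
    return [1 if random.random() < p else 0 for _ in range(n)]

# Decode a stream back to a value: the fraction of ones.
def decode(bits):
    return sum(bits) / len(bits)

n = 100_000
a, b = encode(0.8, n), encode(0.5, n)
# ANDing two independent streams multiplies the coded values,
# since P(a_i = 1 and b_i = 1) = P(a_i = 1) * P(b_i = 1).
product = decode([x & y for x, y in zip(a, b)])
print(round(product, 2))  # close to 0.8 * 0.5 = 0.4
```

This is why the processing elements can be so simple: a multiplier is one gate, at the cost of long streams (precision improves only as the square root of the stream length).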
On-line hand-printing recognition with neural networks
R. Lyon, L. Yaeger
DOI: https://doi.org/10.1109/MNNFS.1996.493792
Abstract: The need for fast and accurate text entry on small handheld computers has led to a resurgence of interest in on-line word recognition using artificial neural networks. Classical methods have been combined and improved to produce robust recognition of hand-printed English text. The central concept of a neural net as a character classifier provides a good base for a recognition system; long-standing issues concerning training, generalization, segmentation, probabilistic formalisms, etc., need to be resolved, however, to obtain adequate performance. A number of innovations in how to use a neural net as a classifier in a word recognizer are presented: negative training, stroke warping, balancing, normalized output error, error emphasis, multiple representations, quantized weights, and integrated word segmentation all contribute to efficient and robust performance.
Cited by: 32
A variable-precision systolic architecture for ANN computation
Amine Bermak, D. Martinez
DOI: https://doi.org/10.1109/MNNFS.1996.493814
Abstract: When artificial neural networks (ANNs) are implemented in VLSI with fixed-precision arithmetic, the accumulation of numerical errors may lead to results that are completely inaccurate. To avoid this, we propose a variable-precision arithmetic in which the precision of the computation is specified by the user at each layer of the network. This paper presents a top-down approach to designing an efficient bit-level systolic architecture for variable-precision neural computation.
Cited by: 1
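The per-layer precision idea can be sketched in software by quantizing each layer's result to a user-chosen number of fractional bits. This illustrates the error-accumulation trade-off only; the weights are hypothetical and nothing here models the paper's bit-level systolic design:

```python
# Round a value to a fixed number of fractional bits.
def quantize(x, frac_bits):
    scale = 1 << frac_bits
    return round(x * scale) / scale

# One "layer": multiply, then requantize at that layer's precision.
def layer(x, weight, frac_bits):
    return quantize(x * weight, frac_bits)

weights = [0.33, 0.77, 0.51, 0.91]  # hypothetical per-layer gains

x = 1.0
for w in weights:
    x = layer(x, w, frac_bits=4)    # coarse: 4 fractional bits everywhere
coarse = x

x = 1.0
for w in weights:
    x = layer(x, w, frac_bits=12)   # fine: 12 fractional bits everywhere
fine = x

exact = 0.33 * 0.77 * 0.51 * 0.91
print(abs(coarse - exact), abs(fine - exact))  # coarse error is larger
```

Letting the user pick `frac_bits` per layer is the software analogue of the proposal: spend bits only where the error accumulation actually hurts.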
Single electron tunneling technology for neural networks
M. Goossens, C. Verhoeven, A. V. van Roermund
DOI: https://doi.org/10.1109/MNNFS.1996.493782
Abstract: A new neural network hardware concept based on single electron tunneling is presented. Single electron tunneling (SET) transistors have several advantageous properties that make them very attractive for building neural networks, among them their very small size, extremely low power consumption, and potentially high speed. After a brief description of the technology, the relevant properties of SET transistors are described. Simulations have been performed on some small circuits of SET transistors that exhibit functional properties similar to those required for neural networks. Finally, interconnecting the building blocks to form a neural network is analyzed.
Cited by: 21