{"title":"Parallel digital improvements of neural networks [Book Reviews]","authors":"R. Tadeusiewicz","doi":"10.1109/M-PDT.1996.532144","DOIUrl":null,"url":null,"abstract":"Neural networks have increased not only in the number of applications but also in complexity. This increase in complexity has created a tremendous need for computational power, perhaps more power than conventional scalar processors can deliver efficiently. Such processors are oriented toward numeric and data manipulation. Neurocomputing requirements (such as nonprogramming and learning) impose different constraints and demands on the computer architectures and on the structure of multicomputer systems. We need new neurocomputers, dedicated to neural networks applications. This is the scope of Parallel Digital Implementations of Neural Networks. T h e surge of interest in neural networks, which started in the mid-eighties, stemmed largely from advances in VLSI technology. But hardware implementations of neural networks are still not as popular as the software tools for neural network modeling, learning, and applications. Information on hardware neural network implementations is still too limited and exotic for many neural network users. This book fills an important gap for such users. Neural networks have recently become such a subject of great interest to so many scientists, engineers, and smdents that you can easily find many books and papers about implementations (for example, Analogue Neural VLSI, by A. Murray and L. Tarassenko, Chapman & Hall; Neurocomputers: An Overview o f Neural Networks in VLSI, by M. Glesner and W. Poechmueller, Chapman & Hall; and VLSIfor Neural Networks and Art-ificial Intelligence, byJ.G. Delgado-Frias and W.R. Moore, Plenum Press). However, this book is different. It is wellfocused; it does not discuss all forms of VLSI neural network implementations, but presents only the most interesting and most important: parallel digital implementations. No analog circuits, no serial architecrures, no computer models. Only digital devices (general-purpose processors, such as array processors and DSP chips, or dedicated systems such as neurocomputers or digital neurochips), and only parallel solutions. This narrow focus is good, because the digital implementations of neural networks provide advantages such as freedom from noise, programmability, higher precision, and reliable storage devices. The book has three main sections:","PeriodicalId":325213,"journal":{"name":"IEEE Parallel & Distributed Technology: Systems & Applications","volume":"73 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1996-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Parallel & Distributed Technology: Systems & Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/M-PDT.1996.532144","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Neural networks have increased not only in the number of applications but also in complexity. This increase in complexity has created a tremendous need for computational power, perhaps more power than conventional scalar processors can deliver efficiently. Such processors are oriented toward numeric and data manipulation. Neurocomputing requirements (such as nonprogramming and learning) impose different constraints and demands on computer architectures and on the structure of multicomputer systems. We need new neurocomputers, dedicated to neural network applications. This is the scope of Parallel Digital Implementations of Neural Networks.

The surge of interest in neural networks, which started in the mid-eighties, stemmed largely from advances in VLSI technology. But hardware implementations of neural networks are still not as popular as the software tools for neural network modeling, learning, and applications. Information on hardware neural network implementations is still too limited and exotic for many neural network users. This book fills an important gap for such users.

Neural networks have recently become a subject of such great interest to so many scientists, engineers, and students that you can easily find many books and papers about implementations (for example, Analogue Neural VLSI, by A. Murray and L. Tarassenko, Chapman & Hall; Neurocomputers: An Overview of Neural Networks in VLSI, by M. Glesner and W. Poechmueller, Chapman & Hall; and VLSI for Neural Networks and Artificial Intelligence, by J.G. Delgado-Frias and W.R. Moore, Plenum Press). However, this book is different. It is well focused; it does not discuss all forms of VLSI neural network implementations, but presents only the most interesting and most important: parallel digital implementations. No analog circuits, no serial architectures, no computer models. Only digital devices (general-purpose processors, such as array processors and DSP chips, or dedicated systems such as neurocomputers or digital neurochips), and only parallel solutions. This narrow focus is good, because digital implementations of neural networks provide advantages such as freedom from noise, programmability, higher precision, and reliable storage devices.

The book has three main sections: