{"title":"Self-organization in CNN-based Object Nets","authors":"L. Werbos, P. Werbos","doi":"10.1109/CNNA.2010.5430292","DOIUrl":null,"url":null,"abstract":"Cellular Neural Network (CNN) chips containing a thousand times as many processors as conventional programmable chips can offer a huge improvement in computational throughput, for those applications they are able to address. The artificial neural network (ANN) community has developed new learning designs and topologies, consistent with CNN, which can provide very general capabilities, especially for tasks calling for optimal decision-making or control or for prediction. The Multilayer Perceptron (MLP), a conventional ANN, approximates smooth input-output relations or functions of many variables much better than traditional universal approximators like Taylor series or piecewise approximations; however, to cope with even larger systems such as megapixel image recognition or control of electric power grids, it is necessary to move to a family of more complex ANNs such as cellular Simultaneous Recurrent Networks (CSRN), Object Networks, and networks with object symmetry and small world connectivity, which can be emulated on CNNs. Kozma and Werbos have recently patented a new learning algorithm which achieved adequate learning speed in training such networks to handle tasks beyond the capacity of conventional ANNs. 
A new center at the FedEx Institute of Technology plans to use and extend these capabilities in many directions.","PeriodicalId":336891,"journal":{"name":"2010 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA 2010)","volume":"188 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 12th International Workshop on Cellular Nanoscale Networks and their Applications (CNNA 2010)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CNNA.2010.5430292","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 3
Abstract
Cellular Neural Network (CNN) chips, containing a thousand times as many processors as conventional programmable chips, can offer a huge improvement in computational throughput for the applications they can address. The artificial neural network (ANN) community has developed new learning designs and topologies, compatible with CNN hardware, that provide very general capabilities, especially for tasks calling for optimal decision-making, control, or prediction. The Multilayer Perceptron (MLP), a conventional ANN, approximates smooth input-output relations, or functions of many variables, much better than traditional universal approximators such as Taylor series or piecewise approximations. However, to cope with even larger systems, such as megapixel image recognition or control of electric power grids, it is necessary to move to a family of more complex ANNs, such as Cellular Simultaneous Recurrent Networks (CSRNs), Object Networks, and networks with object symmetry and small-world connectivity, which can be emulated on CNNs. Kozma and Werbos have recently patented a new learning algorithm that achieves adequate learning speed in training such networks on tasks beyond the capacity of conventional ANNs. A new center at the FedEx Institute of Technology plans to use and extend these capabilities in many directions.
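The abstract's claim about the MLP as a universal approximator can be illustrated with a minimal sketch: a one-hidden-layer tanh network trained by plain gradient descent to fit a smooth one-variable function. This is not the patented Kozma-Werbos algorithm or any design from the paper; all names, sizes, and hyperparameters below are illustrative assumptions.

```python
# Illustrative sketch only: a one-hidden-layer MLP fit to a smooth
# function by gradient descent, showing the universal-approximation
# role the abstract ascribes to MLPs. Not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# Target: a smooth function of one variable, sampled on [-1, 1].
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(np.pi * x)

# One hidden layer of H tanh units (H chosen arbitrarily for the demo).
H = 16
W1 = rng.normal(0.0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

def forward(x):
    h = np.tanh(x @ W1 + b1)        # hidden activations
    return h, h @ W2 + b2           # hidden layer, network output

_, y0 = forward(x)
loss0 = np.mean((y0 - y) ** 2)      # mean-squared error before training

lr = 0.05
for _ in range(2000):
    h, yhat = forward(x)
    err = yhat - y                  # output-layer error
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, y1 = forward(x)
loss1 = np.mean((y1 - y) ** 2)      # error drops as the MLP fits sin
print(loss0, loss1)
```

Adaptive basis functions (the tanh hidden units move to where the target function varies) are what give the MLP its edge over fixed bases like Taylor polynomials as the input dimension grows.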