Message from the Conference General Chair

DOI: 10.1109/acsat.2014.5
Venue: 2021 IEEE Green Technologies Conference (GreenTech)
Citations: 0
Abstract
A comparison of various computational intelligence methods is presented and illustrated with examples. These methods include neural networks, fuzzy systems, and evolutionary computation. The presentation focuses on neural networks, fuzzy systems, and neuro-fuzzy architectures. Various learning methods for neural networks, both supervised and unsupervised, are presented and illustrated with examples. A general learning rule, expressed as a function of the incoming signals, is discussed. Other learning rules, such as Hebbian learning, perceptron learning, LMS (Least Mean Square) learning, delta learning, WTA (Winner-Take-All) learning, and PCA (Principal Component Analysis), are presented as derivations of the general learning rule. Architecture-specific learning algorithms for cascade correlation networks, Sarajedini and Hecht-Nielsen networks, functional link networks, polynomial networks, counterpropagation networks, and RBF (Radial Basis Function) networks are described. Dedicated learning algorithms for on-chip neural network training are also evaluated. The tutorial covers various practical methods such as Quickprop, RPROP, Back Percolation, and Delta-bar-Delta. The main causes of convergence difficulties, such as local minima and flat-spot problems, are analyzed. More advanced gradient-based methods, including pseudo-inverse learning, conjugate gradient, Newton's method, and the LM (Levenberg-Marquardt) algorithm, are illustrated with examples. Advantages and disadvantages of fuzzy systems will be presented, together with a detailed comparison of the Mamdani and Takagi-Sugeno approaches. Various neuro-fuzzy architectures will be discussed. In conclusion, the advantages and disadvantages of neural and fuzzy approaches will be compared with reference to their hardware implementation.
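The idea that several classic rules derive from one general learning rule can be sketched as follows. This is my own minimal illustration, not code from the tutorial: each rule computes a weight change of the form Δw = c · f(w, x, d) · x, differing only in the learning signal f.

```python
# Illustrative sketch: three learning rules as special cases of the
# general rule  delta_w_i = c * f(w, x, d) * x_i.  Function and variable
# names are mine, chosen for clarity.

def hebbian_update(w, x, lr):
    """Hebbian rule (unsupervised): the learning signal is the neuron's
    own output net = w . x, so correlated inputs get reinforced."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * net * xi for wi, xi in zip(w, x)]

def delta_update(w, x, d, lr):
    """Delta/LMS rule (supervised): the learning signal is the error
    (d - net) of a linear unit, i.e. gradient descent on squared error."""
    net = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + lr * (d - net) * xi for wi, xi in zip(w, x)]

def wta_update(weights, x, lr):
    """Winner-Take-All (unsupervised, competitive): only the neuron whose
    weight vector best matches x is updated, pulled toward the input."""
    nets = [sum(wi * xi for wi, xi in zip(w, x)) for w in weights]
    winner = nets.index(max(nets))
    weights[winner] = [wi + lr * (xi - wi)
                       for wi, xi in zip(weights[winner], x)]
    return weights
```

The supervised rules need a desired output d, while the Hebbian and WTA rules learn from the input statistics alone, which is the supervised/unsupervised split the abstract refers to.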
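The Levenberg-Marquardt algorithm mentioned among the advanced gradient-based methods can be illustrated in its simplest form. This is a sketch under my own simplifications (a scalar model y = a·x, so the Jacobian is just the vector of inputs), not the tutorial's implementation:

```python
# Illustrative sketch: one Levenberg-Marquardt step for fitting y = a*x.
# With residuals r_i = a*x_i - y_i and Jacobian entries J_i = x_i, the
# update is  a_new = a - (J^T J + mu)^(-1) J^T r.
# mu -> 0 recovers the Gauss-Newton step; large mu behaves like a small
# gradient-descent step, which is how LM blends the two methods.

def lm_step(a, xs, ys, mu):
    r = [a * x - y for x, y in zip(xs, ys)]    # residuals
    jtj = sum(x * x for x in xs)               # J^T J (a scalar here)
    jtr = sum(x * ri for x, ri in zip(xs, r))  # J^T r
    return a - jtr / (jtj + mu)
```

In practice mu is decreased when a step reduces the error and increased when it does not, so the method adapts between the fast Gauss-Newton behavior near the minimum and the robust gradient-descent behavior far from it.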
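The Takagi-Sugeno approach compared against Mamdani in the abstract can be sketched in its zero-order form, where each rule's consequent is a constant and the output is the firing-strength-weighted average. The membership functions and rule constants below are my own toy examples, not from the tutorial:

```python
# Illustrative sketch of zero-order Takagi-Sugeno inference: each rule is
# a (membership_function, consequent_constant) pair, and the crisp output
# is the weighted average of the constants, weighted by firing strength.
# Unlike Mamdani inference, no output fuzzy sets or defuzzification step
# is needed, which is one reason TS systems suit hardware implementation.

def ts_infer(x, rules):
    num = sum(mu(x) * c for mu, c in rules)
    den = sum(mu(x) for mu, _ in rules)
    return num / den

# Toy rule base on x in [0, 1]:
#   IF x is LOW  THEN y = 0
#   IF x is HIGH THEN y = 10
example_rules = [
    (lambda x: max(0.0, 1.0 - x), 0.0),   # LOW: ramps down from 1 to 0
    (lambda x: max(0.0, x), 10.0),        # HIGH: ramps up from 0 to 1
]
```

A Mamdani system would instead clip or scale output fuzzy sets and defuzzify (e.g. by centroid), which is more interpretable but computationally heavier; that trade-off is the core of the Mamdani-vs-TS comparison.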