{"title":"一种新型高性能、低能耗的多功能神经网络","authors":"L. M. Zhang","doi":"10.1109/ICCI-CC.2016.7862082","DOIUrl":null,"url":null,"abstract":"A common artificial neural network (ANN) uses the same activation function for all hidden and output neurons. Therefore, it has an optimization limitation for complex big data analysis due to its single mathematical functionality. In addition, an ANN with a complicated activation function uses a very long training time and consumes a lot of energy. To address these issues, this paper presents a new energy-efficient “Multifunctional Neural Network” (MNN) that uses a variety of different activation functions to effectively improve performance and significantly reduce energy consumption. A generic training algorithm is designed to optimize the weights, biases, and function selections for improving performance while still achieving relatively fast computational time and reducing energy usage. A novel general learning algorithm is developed to train the new energy-efficient MNN. For performance analysis, a new “Genetic Deep Multifunctional Neural Network” (GDMNN) uses genetic algorithms to optimize the weights and biases, and selects the set of best-performing energy-efficient activation functions for all neurons. The results from sufficient simulations indicate that this optimized GDMNN can perform better than other GDMNNs in terms of achieving high performance (prediction accuracy), low energy consumption, and fast training time. 
Future works include (1) developing more effective energy-efficient learning algorithms for the MNN for data mining application problems, and (2) using parallel cloud computing methods to significantly speed up training the MNN.","PeriodicalId":135701,"journal":{"name":"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"A new multifunctional neural network with high performance and low energy consumption\",\"authors\":\"L. M. Zhang\",\"doi\":\"10.1109/ICCI-CC.2016.7862082\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A common artificial neural network (ANN) uses the same activation function for all hidden and output neurons. Therefore, it has an optimization limitation for complex big data analysis due to its single mathematical functionality. In addition, an ANN with a complicated activation function uses a very long training time and consumes a lot of energy. To address these issues, this paper presents a new energy-efficient “Multifunctional Neural Network” (MNN) that uses a variety of different activation functions to effectively improve performance and significantly reduce energy consumption. A generic training algorithm is designed to optimize the weights, biases, and function selections for improving performance while still achieving relatively fast computational time and reducing energy usage. A novel general learning algorithm is developed to train the new energy-efficient MNN. For performance analysis, a new “Genetic Deep Multifunctional Neural Network” (GDMNN) uses genetic algorithms to optimize the weights and biases, and selects the set of best-performing energy-efficient activation functions for all neurons. 
The results from sufficient simulations indicate that this optimized GDMNN can perform better than other GDMNNs in terms of achieving high performance (prediction accuracy), low energy consumption, and fast training time. Future works include (1) developing more effective energy-efficient learning algorithms for the MNN for data mining application problems, and (2) using parallel cloud computing methods to significantly speed up training the MNN.\",\"PeriodicalId\":135701,\"journal\":{\"name\":\"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)\",\"volume\":\"20 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCI-CC.2016.7862082\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2016 IEEE 15th International Conference on Cognitive Informatics & Cognitive Computing (ICCI*CC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCI-CC.2016.7862082","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1
摘要
常见的人工神经网络(ANN)对所有隐藏神经元和输出神经元使用相同的激活函数。因此,由于数学功能单一,对复杂的大数据分析存在优化限制。此外,激活函数复杂的人工神经网络训练时间长,能量消耗大。为了解决这些问题,本文提出了一种新的节能“多功能神经网络”(MNN),该网络使用多种不同的激活函数来有效提高性能并显着降低能耗。设计了一种通用的训练算法来优化权重、偏置和函数选择,以提高性能,同时仍然实现相对较快的计算时间和减少能量使用。提出了一种新的通用学习算法来训练新型节能MNN。在性能分析方面,一种新的“遗传深度多功能神经网络”(Genetic Deep Multifunctional Neural Network, GDMNN)利用遗传算法对权重和偏置进行优化,并为所有神经元选择性能最佳的节能激活函数集。大量的仿真结果表明,优化后的GDMNN在实现高性能(预测精度)、低能耗和快速训练时间方面优于其他GDMNN。未来的工作包括(1)为MNN开发更有效节能的学习算法,用于数据挖掘应用问题,以及(2)使用并行云计算方法显着加快MNN的训练速度。
A new multifunctional neural network with high performance and low energy consumption
A common artificial neural network (ANN) uses the same activation function for all hidden and output neurons. Its single mathematical functionality therefore limits how well it can be optimized for complex big-data analysis. In addition, an ANN with a complicated activation function requires a long training time and consumes a lot of energy. To address these issues, this paper presents a new energy-efficient “Multifunctional Neural Network” (MNN) that uses a variety of different activation functions to effectively improve performance and significantly reduce energy consumption. A generic training algorithm is designed to optimize the weights, biases, and function selections, improving performance while keeping computational time relatively short and energy usage low. A novel general learning algorithm is developed to train the new energy-efficient MNN. For performance analysis, a new “Genetic Deep Multifunctional Neural Network” (GDMNN) uses genetic algorithms to optimize the weights and biases and to select the set of best-performing, energy-efficient activation functions for all neurons. Results from extensive simulations indicate that this optimized GDMNN outperforms other GDMNN variants in terms of prediction accuracy, energy consumption, and training time. Future work includes (1) developing more effective, energy-efficient learning algorithms for the MNN for data-mining application problems, and (2) using parallel cloud-computing methods to significantly speed up training of the MNN.
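The abstract does not give the paper's actual algorithm, but the core idea — a network whose per-neuron activation functions are chosen by a genetic algorithm alongside its weights and biases — can be illustrated with a minimal sketch. Everything below (the candidate activation set, the single hidden layer, the mutation-with-elitism scheme, the XOR task) is an assumption made for illustration, not the paper's method.

```python
import math
import random

# Assumed candidate activation set; the paper's actual set is not specified.
ACTIVATIONS = [
    ("relu", lambda x: max(0.0, x)),
    ("tanh", math.tanh),
    ("sigmoid", lambda x: 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))),
]

# Genome = (hidden weights, hidden biases, output weights, output bias,
#           per-neuron activation indices). The activation indices are the
# "multifunctional" part: each hidden neuron picks its own function.
def forward(genome, x):
    w_hid, b_hid, w_out, b_out, acts = genome
    hidden = []
    for j in range(len(b_hid)):
        z = sum(w * xi for w, xi in zip(w_hid[j], x)) + b_hid[j]
        hidden.append(ACTIVATIONS[acts[j]][1](z))  # per-neuron activation
    return sum(w * h for w, h in zip(w_out, hidden)) + b_out

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(genome):
    # Negative total squared error on the toy task (higher is better).
    return -sum((forward(genome, x) - y) ** 2 for x, y in XOR)

def random_genome(n_in=2, n_hid=4):
    r = lambda: random.uniform(-2, 2)
    return ([[r() for _ in range(n_in)] for _ in range(n_hid)],
            [r() for _ in range(n_hid)],
            [r() for _ in range(n_hid)],
            r(),
            [random.randrange(len(ACTIVATIONS)) for _ in range(n_hid)])

def mutate(genome, sigma=0.3, p_act=0.1):
    # Gaussian perturbation of weights/biases; occasionally re-pick a
    # neuron's activation function.
    w_hid, b_hid, w_out, b_out, acts = genome
    g = lambda v: v + random.gauss(0, sigma)
    return ([[g(w) for w in row] for row in w_hid],
            [g(b) for b in b_hid],
            [g(w) for w in w_out],
            g(b_out),
            [random.randrange(len(ACTIVATIONS)) if random.random() < p_act
             else a for a in acts])

def evolve(pop_size=60, generations=200, elite=10, seed=0):
    random.seed(seed)
    pop = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:elite]                      # elitism
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - elite)]
    return max(pop, key=fitness)

best = evolve()
```

A design note on why this sketch uses a genetic algorithm at all: the activation-function indices are discrete, so gradient descent cannot optimize them directly; evolving the function choices jointly with the continuous weights sidesteps that, which mirrors the role the GDMNN assigns to genetic algorithms in the abstract.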