Evolving Optimal Convolutional Neural Networks

Subhashis Banerjee, S. Mitra
{"title":"进化最优卷积神经网络","authors":"Subhashis Banerjee, S. Mitra","doi":"10.1109/SSCI47803.2020.9308201","DOIUrl":null,"url":null,"abstract":"Among the different Deep Learning (DL) models, the deep Convolutional Neural Networks (CNNs) have demonstrated impressive performance in a variety of image recognition or classification tasks. Although CNNs do not require feature engineering or manual extraction of features at the input level, yet designing a suitable CNN architecture necessitates considerable expert knowledge involving enormous amount of trial-and-error activities. In this paper we attempt to automatically design a competitive CNN architecture for a given problem while consuming reasonable machine resource(s) based on a modified version of Cartesian Genetic Programming (CGP). As CGP uses only the mutation operator to generate offsprings it typically evolves slowly. We develop a new algorithm which introduces crossover to the standard CGP to generate an optimal CNN architecture. The genotype encoding scheme is changed from integer to floating-point representation for this purpose. The function genes in the nodes of the CGP are chosen as the highly functional modules of CNN. Typically CNNs use convolution and pooling, followed by activation. Rather than using each of them separately as a function gene for a node, we combine them in a novel way to construct highly functional modules. Five types of functions, called ConvBlock, average pooling, max pooling, summation, and concatenation, were considered. We test our method on an image classification dataset CIFAR10, since it is being used as the benchmark for many similar problems. Experiments demonstrate that the proposed scheme converges fast and automatically finds the competitive CNN architecture as compared to state-of-the-art solutions which require thousands of generations or GPUs involving huge computational burden.","PeriodicalId":413489,"journal":{"name":"2020 IEEE Symposium Series on Computational Intelligence (SSCI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Evolving Optimal Convolutional Neural Networks\",\"authors\":\"Subhashis Banerjee, S. Mitra\",\"doi\":\"10.1109/SSCI47803.2020.9308201\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Among the different Deep Learning (DL) models, the deep Convolutional Neural Networks (CNNs) have demonstrated impressive performance in a variety of image recognition or classification tasks. Although CNNs do not require feature engineering or manual extraction of features at the input level, yet designing a suitable CNN architecture necessitates considerable expert knowledge involving enormous amount of trial-and-error activities. In this paper we attempt to automatically design a competitive CNN architecture for a given problem while consuming reasonable machine resource(s) based on a modified version of Cartesian Genetic Programming (CGP). As CGP uses only the mutation operator to generate offsprings it typically evolves slowly. We develop a new algorithm which introduces crossover to the standard CGP to generate an optimal CNN architecture. The genotype encoding scheme is changed from integer to floating-point representation for this purpose. The function genes in the nodes of the CGP are chosen as the highly functional modules of CNN. Typically CNNs use convolution and pooling, followed by activation. 
Rather than using each of them separately as a function gene for a node, we combine them in a novel way to construct highly functional modules. Five types of functions, called ConvBlock, average pooling, max pooling, summation, and concatenation, were considered. We test our method on an image classification dataset CIFAR10, since it is being used as the benchmark for many similar problems. Experiments demonstrate that the proposed scheme converges fast and automatically finds the competitive CNN architecture as compared to state-of-the-art solutions which require thousands of generations or GPUs involving huge computational burden.\",\"PeriodicalId\":413489,\"journal\":{\"name\":\"2020 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"volume\":\"13 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2020 IEEE Symposium Series on Computational Intelligence (SSCI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SSCI47803.2020.9308201\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE Symposium Series on Computational Intelligence (SSCI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SSCI47803.2020.9308201","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Among the different Deep Learning (DL) models, deep Convolutional Neural Networks (CNNs) have demonstrated impressive performance in a variety of image recognition and classification tasks. Although CNNs do not require feature engineering or manual extraction of features at the input level, designing a suitable CNN architecture still demands considerable expert knowledge and an enormous amount of trial and error. In this paper we attempt to automatically design a competitive CNN architecture for a given problem, while consuming reasonable machine resources, based on a modified version of Cartesian Genetic Programming (CGP). Because CGP uses only the mutation operator to generate offspring, it typically evolves slowly. We develop a new algorithm that introduces crossover into standard CGP to generate an optimal CNN architecture; for this purpose the genotype encoding scheme is changed from an integer to a floating-point representation. The function genes in the nodes of the CGP are chosen as highly functional modules of the CNN. CNNs typically use convolution and pooling followed by activation; rather than using each of these separately as the function gene of a node, we combine them in a novel way to construct highly functional modules. Five types of functions, called ConvBlock, average pooling, max pooling, summation, and concatenation, were considered. We test our method on the image classification dataset CIFAR-10, since it is used as the benchmark for many similar problems. Experiments demonstrate that the proposed scheme converges fast and automatically finds a competitive CNN architecture, as compared to state-of-the-art solutions that require thousands of generations or many GPUs and involve a huge computational burden.
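The abstract names ConvBlock as one of the five function genes but does not spell out its internal composition. The sketch below is a minimal illustration only, assuming a ConvBlock bundles a convolution, batch normalization, and a ReLU activation (a common choice in CGP-based CNN encodings); the channel counts and kernel size are hypothetical parameters, not values from the paper.

```python
# Minimal sketch of a "ConvBlock" function gene (assumed composition:
# convolution + batch norm + ReLU). Channel counts and kernel size are
# hypothetical illustration values.
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, in_channels: int, out_channels: int, kernel_size: int = 3):
        super().__init__()
        # 'same'-style padding keeps spatial size unchanged, so summation
        # and concatenation nodes can later combine feature maps directly.
        self.block = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size,
                      padding=kernel_size // 2, bias=False),
            nn.BatchNorm2d(out_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# Usage: a batch of 32x32 CIFAR-10-sized images through one block.
x = torch.randn(8, 3, 32, 32)
y = ConvBlock(3, 64)(x)   # -> shape (8, 64, 32, 32)
```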
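To illustrate why moving from an integer to a floating-point genotype makes crossover straightforward, the sketch below encodes each CGP node as three real-valued genes and applies a blend-style crossover. The node layout, the decoding scheme, and the particular crossover operator are assumptions for illustration only, not the paper's exact algorithm.

```python
# Sketch of a floating-point CGP genotype with crossover. Real-valued
# genes in [0, 1) are decoded back to discrete choices, so a standard
# real-valued crossover can be applied directly. Node count, genes per
# node, and the blend crossover are hypothetical choices.
import random

FUNCTIONS = ["ConvBlock", "avg_pool", "max_pool", "summation", "concatenation"]
N_NODES = 10          # hypothetical number of CGP nodes
GENES_PER_NODE = 3    # (function gene, input gene 1, input gene 2)

def random_genotype():
    """Each gene is a real number in [0, 1)."""
    return [random.random() for _ in range(N_NODES * GENES_PER_NODE)]

def decode_node(genotype, node_idx):
    """Map a node's real-valued genes back to a function and input indices."""
    f, a, b = genotype[node_idx * GENES_PER_NODE:(node_idx + 1) * GENES_PER_NODE]
    function = FUNCTIONS[int(f * len(FUNCTIONS))]
    # Inputs may only come from earlier nodes (index 0 is the network input).
    n_sources = node_idx + 1
    return function, int(a * n_sources), int(b * n_sources)

def crossover(parent1, parent2, alpha=0.5):
    """Blend-style crossover: child genes are convex combinations of parents."""
    return [alpha * g1 + (1 - alpha) * g2 for g1, g2 in zip(parent1, parent2)]

p1, p2 = random_genotype(), random_genotype()
child = crossover(p1, p2)
print([decode_node(child, i) for i in range(N_NODES)])
```

Because each child gene is a convex combination of genes that already lie in [0, 1), the offspring always decodes to a valid node, which is what the floating-point representation buys over direct integer crossover.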