Vector-Based Back Propagation Algorithm of Supervised Convolution Neural Network
Nesrine Wagaa, H. Kallel
2020 International Conference on Control, Automation and Diagnosis (ICCAD), October 2020
DOI: 10.1109/ICCAD49821.2020.9260520

Abstract: The primary goal of this paper is to analyze the impact of the convolution operation on model performance. To avoid the mathematical complexity behind the Convolutional Neural Network (CNN) model, the classical convolution operation is replaced by a newly proposed matrix operation. The model considered consists of one convolution layer in series with a set of fully connected hidden layers. The network parameters (filters, weights, and biases) are updated with the back-propagation gradient-descent algorithm. Model performance is improved by varying the width and height CNN hyper-parameters. The MNIST dataset is used here for handwritten-digit classification. With a simple modification of the CNN hyper-parameters using the proposed matrix operation, a classification accuracy of 98.83% was achieved.
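The abstract does not spell out the proposed matrix operation, but the standard way to recast convolution as a single matrix product is the "im2col" unrolling: each sliding window of the input is flattened into one row of a patch matrix, and the filter is flattened into a column vector, so the whole feature map falls out of one matrix-vector multiply. The sketch below illustrates that general idea; the function names and the valid-padding, single-channel setup are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def im2col(x, kh, kw):
    # Unfold every kh x kw patch of the 2-D input into one row
    # of a (num_patches, kh*kw) matrix ("valid" padding, stride 1).
    h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((out_h * out_w, kh * kw))
    for i in range(out_h):
        for j in range(out_w):
            cols[i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_as_matmul(x, kernel):
    # Valid cross-correlation computed as a single matrix product:
    # the patch matrix times the flattened filter, reshaped back
    # to the output feature-map grid.
    kh, kw = kernel.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    return (im2col(x, kh, kw) @ kernel.ravel()).reshape(out_h, out_w)
```

One practical benefit of this view, and a likely motivation for replacing the classical convolution, is that the backward pass becomes the familiar fully connected case: the gradient with respect to the filter is just the patch matrix transposed times the flattened output gradient.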