{"title":"Fully Kernected Neural Networks","authors":"W. Zhang, Zhi Han, Xi’ai Chen, Baicheng Liu, Huidi Jia, Yandong Tang","doi":"10.1155/2023/1539436","DOIUrl":null,"url":null,"abstract":"In this paper, we apply kernel methods to deep convolutional neural network (DCNN) to improve its nonlinear ability. DCNNs have achieved significant improvement in many computer vision tasks. For an image classification task, the accuracy comes to saturation when the depth and width of network are enough and appropriate. The saturation accuracy will not rise even by increasing the depth and width. We find that improving nonlinear ability of DCNNs can break through the saturation accuracy. In a DCNN, the former layer is more inclined to extract features and the latter layer is more inclined to classify features. Therefore, we apply kernel methods at the last fully connected layer to implicitly map features to a higher-dimensional space to improve nonlinear ability so that the network achieves better linear separability. Also, we name the network as fully kernected neural networks (fully connected neural networks with kernel methods). Our experiment result shows that fully kernected neural networks achieve higher classification accuracy and faster convergence rate than baseline networks.","PeriodicalId":43667,"journal":{"name":"Muenster Journal of Mathematics","volume":null,"pages":null},"PeriodicalIF":0.7000,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Muenster Journal of Mathematics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2023/1539436","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MATHEMATICS","Score":null,"Total":0}
Citations: 0
Abstract
In this paper, we apply kernel methods to deep convolutional neural networks (DCNNs) to improve their nonlinear ability. DCNNs have achieved significant improvements in many computer vision tasks. For an image classification task, however, accuracy saturates once the depth and width of the network are sufficient and appropriate; increasing depth and width further does not raise it. We find that improving the nonlinear ability of a DCNN can break through this saturation accuracy. In a DCNN, earlier layers tend to extract features, while later layers tend to classify them. We therefore apply kernel methods at the last fully connected layer, implicitly mapping features to a higher-dimensional space to improve nonlinear ability so that the network achieves better linear separability. We name the resulting networks fully kernected neural networks (fully connected neural networks with kernel methods). Our experimental results show that fully kernected neural networks achieve higher classification accuracy and a faster convergence rate than baseline networks.
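The core idea lends itself to a short sketch. Below is a minimal, hypothetical PyTorch implementation of a "kernected" final layer, in which the dot products x·w of a standard fully connected classifier are replaced by kernel evaluations k(x, w) against learnable per-class prototype vectors. This is not the authors' code: the kernel choices (RBF and polynomial) and all hyperparameter values are illustrative assumptions.

```python
# Sketch of a "kernected" classification layer: the final FC layer's dot
# products x·w_j are replaced by kernel evaluations k(x, w_j), implicitly
# mapping features into a higher-dimensional space. Kernel forms and
# hyperparameters here are assumptions, not taken from the paper.
import torch
import torch.nn as nn

class KernectedLayer(nn.Module):
    """Final classifier that scores features by kernel similarity
    to one learnable prototype vector per class."""
    def __init__(self, in_features: int, num_classes: int,
                 kernel: str = "rbf", gamma: float = 0.1, degree: int = 2):
        super().__init__()
        # One learnable prototype per class, analogous to FC-layer weights.
        self.weight = nn.Parameter(torch.randn(num_classes, in_features) * 0.01)
        self.kernel, self.gamma, self.degree = kernel, gamma, degree

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) -> scores: (batch, num_classes)
        if self.kernel == "rbf":
            # k(x, w) = exp(-gamma * ||x - w||^2), the Gaussian kernel.
            d2 = torch.cdist(x, self.weight).pow(2)
            return torch.exp(-self.gamma * d2)
        elif self.kernel == "poly":
            # k(x, w) = (x . w + 1)^degree, the polynomial kernel.
            return (x @ self.weight.t() + 1.0).pow(self.degree)
        raise ValueError(f"unknown kernel: {self.kernel}")

# Usage: swap the last FC layer of a (toy) backbone for the kernected layer.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
model = nn.Sequential(backbone, KernectedLayer(256, num_classes=10, kernel="rbf"))
scores = model(torch.randn(8, 3, 32, 32))  # (8, 10) kernel-similarity scores
print(scores.shape)
```

In this sketch the kernel evaluation plays the role of the implicit high-dimensional feature map described in the abstract; replacing the kernel with a plain inner product recovers an ordinary fully connected classifier.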