Kristina Gorshkova, Victoria Zueva, M. Kuznetsova, L. Tugashova
International Review of Automatic Control (IREACO), 31 March 2021. DOI: 10.15866/ireaco.v14i2.20591
Optimizing Deep Learning Methods in Neural Network Architectures
Deep neural networks are a powerful tool for machine learning and have achieved significant success in numerous computer vision and image processing tasks. This paper discusses several new neural network structures that outperform the traditional feedforward architecture. A method of network structure optimization based on the gradient descent and heavy-ball algorithms is proposed. Furthermore, an approach based on sparse representation for simultaneously training and optimizing the network structure is presented. Experiments on the CIFAR-10 and CIFAR-100 classification tasks show that optimizing the ResNet and DenseNet structures with the gradient descent and heavy-ball algorithms, respectively, yields better performance as network depth increases. The network based on sparse representation achieves the highest performance on both datasets; this strategy encourages rapid adaptation to the data at each iteration. The results can be used to design deeper neural networks without loss of accuracy or computational speed.
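The heavy-ball method mentioned in the abstract is Polyak's momentum scheme, which augments plain gradient descent with a term proportional to the previous step. The following is a minimal illustrative sketch, not the paper's actual implementation: the toy objective, step size, and momentum coefficient are assumptions chosen only to show the update rule.

```python
def grad(x):
    # Gradient of the toy objective f(x) = (x - 3)^2.
    return 2.0 * (x - 3.0)

def heavy_ball(x0, lr=0.1, beta=0.9, steps=500):
    """Polyak heavy-ball update:
    x_{k+1} = x_k - lr * grad(x_k) + beta * (x_k - x_{k-1}).
    """
    x_prev, x = x0, x0
    for _ in range(steps):
        x_next = x - lr * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Starting from 0.0, the iterates converge to the minimizer x = 3.
print(heavy_ball(0.0))
```

The momentum term `beta * (x - x_prev)` lets the iterate retain velocity from previous steps, which damps oscillation across narrow valleys and typically accelerates convergence over plain gradient descent.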