{"title":"Deep Residual Network for Image Recognition","authors":"Satnam Singh Saini, P. Rawat","doi":"10.1109/icdcece53908.2022.9792645","DOIUrl":null,"url":null,"abstract":"Training of a neural network is easier than it goes deeper. Deeper architecture makes neural networks more difficult to train because of vanishing gradient and complexity problems, and via this training, deeper neural networks become much time taking and high utilization of computer resources. Introducing residual blocks in neural networks train specifically deeper architecture networks than those used previously. Residual networks gain this achievement by attaching a trip connection to the layers of artificial neural networks. This paper is about showing residual networks and how they work like formulas, we will see residual networks obtain good accuracy, and as well as the model is easier to optimize because Res Net makes training of large structured neural networks more efficient. We will check residual nets on the Image Net dataset with a depth of 152 layers which is 8x more intense than VGG nets yet very less complex. After building this architecture of residual nets gets error up to 3.57% on the Image Net test dataset. We also compare the Res Net result to its equivalent Convolutional Network without residual connection. Our results show that ResNet provides higher accuracy but apart from that, it is more prone to over fitting. Stochastic augmentation of training datasets and adding dropout layers in networks are some of the over fitting prevention methods.","PeriodicalId":417643,"journal":{"name":"2022 IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE)","volume":"575 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icdcece53908.2022.9792645","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 5
Abstract
Training a neural network becomes harder as it grows deeper. Deeper architectures are more difficult to train because of vanishing gradients and increased complexity, and training them consumes considerably more time and computing resources. Introducing residual blocks into neural networks makes it possible to train networks substantially deeper than those used previously. Residual networks achieve this by attaching a skip connection across the layers of an artificial neural network. This paper describes residual networks and the formulation behind them; we show that residual networks obtain good accuracy and that the model is easier to optimize, because ResNet makes training of large, deeply structured neural networks more efficient. We evaluate residual nets on the ImageNet dataset with a depth of 152 layers, which is 8x deeper than VGG nets yet less complex. This residual architecture achieves an error as low as 3.57% on the ImageNet test set. We also compare the ResNet result to an equivalent convolutional network without residual connections. Our results show that ResNet provides higher accuracy but is also more prone to overfitting. Stochastic augmentation of the training dataset and adding dropout layers to the network are among the methods used to prevent overfitting.
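To make the skip-connection idea concrete, the sketch below shows a basic residual block computing y = F(x) + x: two convolutional layers form the residual function F, and the input x is added back before the final activation. This is a minimal illustration assuming PyTorch (the abstract does not name a framework), with the channel count and input shape chosen arbitrarily; it is not the authors' exact implementation.

```python
# Minimal residual-block sketch, assuming PyTorch. Layer sizes are illustrative.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity skip connection: y = F(x) + x."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        identity = x                         # skip connection carries x forward unchanged
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                 # residual addition: F(x) + x
        return self.relu(out)


# Usage: pass a batch of feature maps through the block; the shape is preserved.
x = torch.randn(1, 64, 56, 56)
y = ResidualBlock(64)(x)
print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Because the skip path is an identity, the gradient can flow directly through the addition, which is what lets much deeper stacks of such blocks remain trainable.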