Deep Residual Network for Image Recognition

Satnam Singh Saini, P. Rawat
{"title":"Deep Residual Network for Image Recognition","authors":"Satnam Singh Saini, P. Rawat","doi":"10.1109/icdcece53908.2022.9792645","DOIUrl":null,"url":null,"abstract":"Training of a neural network is easier than it goes deeper. Deeper architecture makes neural networks more difficult to train because of vanishing gradient and complexity problems, and via this training, deeper neural networks become much time taking and high utilization of computer resources. Introducing residual blocks in neural networks train specifically deeper architecture networks than those used previously. Residual networks gain this achievement by attaching a trip connection to the layers of artificial neural networks. This paper is about showing residual networks and how they work like formulas, we will see residual networks obtain good accuracy, and as well as the model is easier to optimize because Res Net makes training of large structured neural networks more efficient. We will check residual nets on the Image Net dataset with a depth of 152 layers which is 8x more intense than VGG nets yet very less complex. After building this architecture of residual nets gets error up to 3.57% on the Image Net test dataset. We also compare the Res Net result to its equivalent Convolutional Network without residual connection. Our results show that ResNet provides higher accuracy but apart from that, it is more prone to over fitting. Stochastic augmentation of training datasets and adding dropout layers in networks are some of the over fitting prevention methods.","PeriodicalId":417643,"journal":{"name":"2022 IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE)","volume":"575 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"5","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE International Conference on Distributed Computing and Electrical Circuits and Electronics (ICDCECE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icdcece53908.2022.9792645","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 5

Abstract

Training a neural network becomes harder as it gets deeper. Deeper architectures make neural networks more difficult to train because of the vanishing-gradient problem and the increased complexity, and training them consumes far more time and computing resources. Introducing residual blocks into a neural network makes it possible to train much deeper architectures than those used previously. Residual networks achieve this by attaching a skip connection to the layers of the artificial neural network. This paper presents residual networks and the formulas behind how they work; we show that residual networks obtain good accuracy and that the model is easier to optimize, because ResNet makes training of large, deep neural networks more efficient. We evaluate residual nets on the ImageNet dataset at a depth of 152 layers, which is 8x deeper than VGG nets yet still lower in complexity. With this architecture, the residual net achieves an error of 3.57% on the ImageNet test set. We also compare the ResNet result to an equivalent convolutional network without residual connections. Our results show that ResNet provides higher accuracy, but it is also more prone to overfitting. Stochastic augmentation of the training dataset and adding dropout layers to the network are among the methods for preventing overfitting.
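
As a concrete illustration of the skip connection described above, the following is a minimal sketch of a residual block in PyTorch. The structure (two 3x3 convolutions with batch normalization and an identity shortcut added before the final ReLU) follows the standard ResNet basic-block design; it is an illustrative sketch under those assumptions, not the authors' released code.

# Minimal sketch of a residual (skip-connection) block, assuming PyTorch.
# Standard ResNet basic-block layout: conv -> BN -> ReLU, conv -> BN,
# identity shortcut added before the final activation.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        identity = x                              # the skip connection carries the input forward
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + identity                      # y = F(x) + x: the layers only learn the residual F(x)
        return self.relu(out)

# Usage: a 56x56 feature map with 64 channels passes through with its shape unchanged.
block = ResidualBlock(64)
y = block(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 64, 56, 56])

Because the shortcut is an identity mapping, the gradient can flow directly through the addition, which is what makes very deep stacks of such blocks trainable.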
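The two overfitting countermeasures mentioned at the end of the abstract, stochastic data augmentation and dropout, can be sketched as follows. This assumes PyTorch and torchvision; the specific transforms and the dropout rate are illustrative choices, not values reported in the paper.

# Sketch of the two overfitting countermeasures named in the abstract,
# assuming PyTorch/torchvision; the exact augmentations and dropout rate
# here are illustrative, not taken from the paper.
import torch.nn as nn
from torchvision import transforms

# Stochastic augmentation: each epoch sees a randomly cropped and flipped
# version of every training image, so the network never memorizes exact inputs.
train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Dropout before the classifier head randomly zeroes activations during
# training, which discourages co-adaptation of features and reduces overfitting.
classifier_head = nn.Sequential(
    nn.Dropout(p=0.5),
    nn.Linear(2048, 1000),  # 2048-d pooled features -> 1000 ImageNet classes
)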