Deep Convolutional Neural Networks for Scene Understanding: A Study of Semantic Segmentation Models

Malvi Mungalpara, Priyanka Goradia, Trisha Baldha, Yanvi Soni
{"title":"Deep Convolutional Neural Networks for Scene Understanding: A Study of Semantic Segmentation Models","authors":"Malvi Mungalpara, Priyanka Goradia, Trisha Baldha, Yanvi Soni","doi":"10.1109/aimv53313.2021.9670955","DOIUrl":null,"url":null,"abstract":"Semantic Image Segmentation for autonomous cars is gaining a lot of popularity in recent times with researchers trying to improvise the model as much as possible. In this paper, we have compared three models, UNet, VGG16_FCN and ResNet50_FCN, which are used for semantic image segmentation. We have trained and tested these models on the cityscape dataset where the models classify each pixel of the image into various classes. Results show that the class-wise accuracy of ResNet50_FCN is more than the other two models. We have also plotted IoU graphs for each model and we found out that ResNet50_FCN and VGG16_FCN have much better scores than the UNet model. Based on these results, we have shown that ResNet50_FCN outperforms the other two models for the case of semantic segmentation for scene understanding.","PeriodicalId":135318,"journal":{"name":"2021 International Conference on Artificial Intelligence and Machine Vision (AIMV)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 International Conference on Artificial Intelligence and Machine Vision (AIMV)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/aimv53313.2021.9670955","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Semantic image segmentation for autonomous cars has gained considerable popularity in recent years, with researchers continually trying to improve segmentation models. In this paper, we compare three models used for semantic image segmentation: UNet, VGG16_FCN, and ResNet50_FCN. We trained and tested these models on the Cityscapes dataset, where each model classifies every pixel of an image into one of several classes. Results show that the class-wise accuracy of ResNet50_FCN is higher than that of the other two models. We also plotted IoU graphs for each model and found that ResNet50_FCN and VGG16_FCN achieve much better scores than the UNet model. Based on these results, we show that ResNet50_FCN outperforms the other two models for semantic segmentation for scene understanding.
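
The comparison rests on two evaluation metrics: class-wise pixel accuracy and per-class IoU. The paper does not include its evaluation code, so the following is only a minimal sketch of how these metrics are commonly computed from integer-encoded prediction and ground-truth label maps; the function name, the NumPy implementation, and the ignore_index convention for unlabeled pixels are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def segmentation_metrics(pred, target, num_classes, ignore_index=255):
    """Per-class pixel accuracy and IoU from integer label maps.

    pred, target: (H, W) arrays of class indices.
    ignore_index: pixels with this ground-truth label are excluded
                  (an assumed convention for unlabeled regions).
    Classes absent from the ground truth (or from both prediction and
    ground truth) are returned as NaN so they can be skipped when averaging.
    """
    acc = np.full(num_classes, np.nan)
    iou = np.full(num_classes, np.nan)
    valid = target != ignore_index
    for c in range(num_classes):
        pred_c = (pred == c) & valid
        target_c = (target == c) & valid
        tp = np.logical_and(pred_c, target_c).sum()      # true positives
        union = np.logical_or(pred_c, target_c).sum()    # prediction ∪ ground truth
        gt = target_c.sum()                              # ground-truth pixels of class c
        if gt > 0:
            acc[c] = tp / gt       # class-wise pixel accuracy
        if union > 0:
            iou[c] = tp / union    # intersection over union
    return acc, iou
```

Averaging the non-NaN entries of the returned IoU array over the dataset's evaluation classes (19 for Cityscapes) gives the mean IoU figure that segmentation comparisons typically report.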