Research and Implementation of Multi-scene Image Semantic Segmentation based on Fully Convolutional Neural Network

F. Yu
{"title":"Research and Implementation of Multi-scene Image Semantic Segmentation based on Fully Convolutional Neural Network","authors":"F. Yu","doi":"10.2991/ICMEIT-19.2019.27","DOIUrl":null,"url":null,"abstract":"With the rapid development of deep neural networks, image recognition and segmentation are important research issues in computer vision in recent years. This paper proposes an image semantic segmentation method based on Fully Convolutional Networks (FCN), which combines the deconvolution layer and convolutional layer converted from the fully connected layer in the traditional Convolutional Neural Networks (CNN). The multi-scene image data set of the label is model-trained, and the training model is applied to pixel-level segmentation of images containing different targets, and the test results are visualized by writing test modules and the segmentation results of the test set images are colored. The experimental process uses two training modes with different parameters to achieve faster and better convergence, and Mini Batch also are used to adapt to the training of big data sets during training. Finally, through the comparison between the segmentation results of test set and the Ground Truth image, it is proved that the full convolutional neural network training model has a higher validity and Robustness for segmentation of some targets in different scene images.","PeriodicalId":223458,"journal":{"name":"Proceedings of the 3rd International Conference on Mechatronics Engineering and Information Technology (ICMEIT 2019)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd International Conference on Mechatronics Engineering and Information Technology (ICMEIT 2019)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2991/ICMEIT-19.2019.27","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

With the rapid development of deep neural networks, image recognition and segmentation have become important research topics in computer vision in recent years. This paper proposes an image semantic segmentation method based on Fully Convolutional Networks (FCN), which combines deconvolution layers with convolutional layers converted from the fully connected layers of a traditional Convolutional Neural Network (CNN). A labeled multi-scene image dataset is used to train the model, and the trained model is applied to pixel-level segmentation of images containing different targets; a test module visualizes the results by coloring the segmentation output of the test-set images. The experiments use two training modes with different parameters to achieve faster and better convergence, and mini-batch training is adopted to handle large datasets. Finally, a comparison between the test-set segmentation results and the ground-truth images shows that the trained fully convolutional network model achieves high validity and robustness when segmenting targets in images of different scenes.
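The core idea described above, replacing the fully connected classifier of a CNN with convolutional layers and restoring spatial resolution with a deconvolution (transposed convolution) layer, can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation; the backbone, layer sizes, and the 21-class output are illustrative assumptions.

```python
# Minimal illustrative FCN-style model (assumed sketch, not the authors' code):
# 1x1 convolutions stand in for fully connected layers, and a transposed
# convolution upsamples the coarse score map back to the input resolution.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, num_classes=21):
        super().__init__()
        # Small convolutional backbone; a real model would typically reuse
        # pretrained CNN features (e.g. VGG-16).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/2 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                      # 1/4 resolution
        )
        # "Convolutionalized" classifier: 1x1 conv in place of fully connected layers.
        self.classifier = nn.Conv2d(128, num_classes, kernel_size=1)
        # Deconvolution layer restores the input resolution (factor 4 here).
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=4)

    def forward(self, x):
        features = self.backbone(x)
        scores = self.classifier(features)        # per-class scores at 1/4 resolution
        return self.upsample(scores)              # per-pixel class scores at full resolution

if __name__ == "__main__":
    model = TinyFCN(num_classes=21)
    out = model(torch.randn(2, 3, 128, 128))      # mini-batch of 2 RGB images
    print(out.shape)                              # torch.Size([2, 21, 128, 128])
```

Taking the per-pixel argmax over the class dimension of the output yields the segmentation map, which can then be colored for visualization as described in the abstract.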