Generating Pedestrian Training Dataset using DCGAN

Daeun Dana Kim, Muhammad Tanseef Shahid, Yunseong Kim, Won Jun Lee, H. Song, F. Piccialli, K. Choi
{"title":"Generating Pedestrian Training Dataset using DCGAN","authors":"Daeun Dana Kim, Muhammad Tanseef Shahid, Yunseong Kim, Won Jun Lee, H. Song, F. Piccialli, K. Choi","doi":"10.1145/3373419.3373458","DOIUrl":null,"url":null,"abstract":"Recently, as autonomous cars are developing very fast, it is the most crucial task to detect pedestrians for autonomous driving. Convolution neural network based on pedestrian detection models has gained enormous success in many applications. However, these models need a large amount of annotated and labeled datasets for training process which requires lots of time and human effort. For training samples, the diversity and quantity of datasets are very important. The proposed framework is based on Deep Convolutional Generative Adversarial Networks (DCGAN), able to generate realistic pedestrians. Experimental results show that DCGAN framework is able to synthesize real pedestrian images with diversity. The synthesized samples can be included in training data to improve the performance of pedestrian detectors. 24,770 images including PETA dataset, Inria dataset were used for the training process.","PeriodicalId":352528,"journal":{"name":"Proceedings of the 2019 3rd International Conference on Advances in Image Processing","volume":"128 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 3rd International Conference on Advances in Image Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3373419.3373458","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 8

Abstract

Recently, as autonomous cars have been developing rapidly, detecting pedestrians has become one of the most crucial tasks for autonomous driving. Convolutional neural network based pedestrian detection models have achieved enormous success in many applications. However, these models need a large amount of annotated and labeled data for training, which requires considerable time and human effort. For training samples, both the diversity and the quantity of the datasets are very important. The proposed framework is based on Deep Convolutional Generative Adversarial Networks (DCGAN) and is able to generate realistic pedestrians. Experimental results show that the DCGAN framework is able to synthesize diverse, realistic pedestrian images. The synthesized samples can be added to the training data to improve the performance of pedestrian detectors. A total of 24,770 images, including the PETA and Inria datasets, were used for training.
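For readers unfamiliar with the architecture the paper builds on, the sketch below shows a minimal DCGAN generator/discriminator pair in PyTorch. It is only an illustration of the general DCGAN design (transposed convolutions with batch norm and ReLU in the generator, strided convolutions with LeakyReLU in the discriminator); the latent size, feature widths, and 64x64 output resolution are assumptions, not values taken from the paper.

```python
# Minimal DCGAN sketch (assumed hyperparameters, not the paper's exact network).
import torch
import torch.nn as nn

NZ = 100   # latent vector size (assumed)
NGF = 64   # generator feature-map base width (assumed)
NDF = 64   # discriminator feature-map base width (assumed)
NC = 3     # RGB pedestrian crops

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # latent z (NZ x 1 x 1) -> 4x4 feature map
            nn.ConvTranspose2d(NZ, NGF * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(NGF * 8), nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(NGF * 8, NGF * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(NGF * 4), nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(NGF * 4, NGF * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(NGF * 2), nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(NGF * 2, NGF, 4, 2, 1, bias=False),
            nn.BatchNorm2d(NGF), nn.ReLU(True),
            # 32x32 -> 64x64 RGB image in [-1, 1]
            nn.ConvTranspose2d(NGF, NC, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # 64x64 RGB image -> 32x32
            nn.Conv2d(NC, NDF, 4, 2, 1, bias=False),
            nn.LeakyReLU(0.2, inplace=True),
            # 32x32 -> 16x16
            nn.Conv2d(NDF, NDF * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(NDF * 2), nn.LeakyReLU(0.2, inplace=True),
            # 16x16 -> 8x8
            nn.Conv2d(NDF * 2, NDF * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(NDF * 4), nn.LeakyReLU(0.2, inplace=True),
            # 8x8 -> 4x4
            nn.Conv2d(NDF * 4, NDF * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(NDF * 8), nn.LeakyReLU(0.2, inplace=True),
            # 4x4 -> single real/fake score per image
            nn.Conv2d(NDF * 8, 1, 4, 1, 0, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)

if __name__ == "__main__":
    G, D = Generator(), Discriminator()
    z = torch.randn(8, NZ, 1, 1)      # batch of latent vectors
    fake = G(z)                       # 8 synthetic 64x64 pedestrian crops
    scores = D(fake)                  # discriminator real/fake scores
    print(fake.shape, scores.shape)   # [8, 3, 64, 64], [8]
```

In a data-augmentation workflow of the kind the abstract describes, the generator would be trained adversarially against the discriminator on the real pedestrian crops, and its outputs would then be mixed into the detector's training set.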