Comparison of Deep Generative Models for the Generation of Handwritten Character Images

Ömer Kirbiyik, Enis Simsar, A. Cemgil
{"title":"Comparison of Deep Generative Models for the Generation of Handwritten Character Images","authors":"Ömer Kirbiyik, Enis Simsar, A. Cemgil","doi":"10.1109/SIU.2019.8806416","DOIUrl":null,"url":null,"abstract":"In this study, we compare deep learning methods for generating images of handwritten characters. This problem can be thought of as a restricted Turing test: A human draws a character from any desired alphabet and the system synthesizes images with similar appearances. The intention here is not to merely duplicate the input image but to add random perturbations to give the impression of being human-produced. For this purpose, the images produced by two different generative models (Generative Adversarial Network and Variational Autoencoder) and the related training method (Reptile) are examined with respect to their visual quality in a subjective manner. Also, the capability of transferring the knowledge that is obtained by the model is challenged by using different datasets for the training and test processes. Using the proposed model and meta-learning method, it is possible to produce not only images similar to the ones in the training set but also novel images that belong to a class which is seen for the first time.","PeriodicalId":326275,"journal":{"name":"2019 27th Signal Processing and Communications Applications Conference (SIU)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-04-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 27th Signal Processing and Communications Applications Conference (SIU)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SIU.2019.8806416","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

In this study, we compare deep learning methods for generating images of handwritten characters. The problem can be viewed as a restricted Turing test: a human draws a character from any desired alphabet and the system synthesizes images with a similar appearance. The aim is not merely to duplicate the input image but to add random perturbations so that the output gives the impression of being human-produced. To this end, the images produced by two different generative models (a Generative Adversarial Network and a Variational Autoencoder), together with the associated training method (Reptile), are assessed subjectively for visual quality. In addition, the models' ability to transfer the knowledge they acquire is tested by using different datasets for training and evaluation. With the proposed model and meta-learning method, it is possible to produce not only images similar to those in the training set but also novel images belonging to a class seen for the first time.
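The abstract pairs the generative models with the Reptile meta-learning method so that a trained model can adapt to character classes it has never seen. As a rough illustration of how that style of training works (not the authors' actual code), the sketch below shows one Reptile outer-loop step in PyTorch; `model`, `sample_task`, and `inner_loss` are hypothetical placeholders standing in for the paper's unspecified architectures and data pipeline.

```python
# Minimal sketch of one Reptile meta-update, assuming a PyTorch setup.
# sample_task() and inner_loss() are hypothetical placeholders.
import copy
import torch

def reptile_step(model, sample_task, inner_loss, inner_steps=5,
                 inner_lr=1e-3, meta_lr=0.1):
    """Adapt a copy of the model to one sampled character class, then move
    the original weights a small step toward the adapted weights."""
    task_batch = sample_task()                  # images of a single character class
    adapted = copy.deepcopy(model)              # task-specific copy of the model
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)

    for _ in range(inner_steps):                # inner-loop adaptation on the task
        opt.zero_grad()
        loss = inner_loss(adapted, task_batch)  # scalar training loss for this task
        loss.backward()
        opt.step()

    # Reptile update: phi <- phi + meta_lr * (W - phi)
    with torch.no_grad():
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            p.add_(meta_lr * (p_adapted - p))
```

In the comparison described above, `inner_loss` would correspond to either the VAE's evidence lower bound or the GAN's adversarial objective, depending on which generative model is being meta-trained; the paper's exact losses and hyperparameters are not given here.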