Font Design Method Based on Multi-scale CycleGAN

Yan Pan, Gang Liu, Xinyun Wu, Changlin Chen, Zhenghao Zhou, Xin Liu
{"title":"Font Design Method Based on Multi-scale CycleGAN","authors":"Yan Pan, Gang Liu, Xinyun Wu, Changlin Chen, Zhenghao Zhou, Xin Liu","doi":"10.1109/icicse55337.2022.9828945","DOIUrl":null,"url":null,"abstract":"Font design is an important research direction in art design and has high commercial value. It requires professionals to design fonts, which is not only time-consuming and costly, but also inefficient. Font-to-font translation is a commonly used font design method. Font-to-font translation is essentially the problem of image synthesis. Currently, generative adversarial networks (GANs) have been used for image synthesis and achieved some results. However, for the task of font-to-font translation the existing methods based on GANs generally have low-quality visual effects, such as incomplete fonts and distortion of font details. In order to solve the above problems, we propose a more effective multi-scale CycleGAN for font-to-font translation and the proposed method can obtain the font images with better visual quality. The proposed method is called MSM-CycleGAN. In MSM-CycleGAN, a U-net with multiple outputs (UM) is used as the generator. UM outputs the generated images of multiple scales. And then the outputs of UM are fed into the multi-scale discriminator. Our model uses the unsupervised learning method. This multi-scale discrimination method effectively improves the detailed information of the generated image. 
Experimental results show that our method performs better than other state-of-the-art image synthesis methods, and can obtain the font images with higher visual quality.","PeriodicalId":177985,"journal":{"name":"2022 IEEE 2nd International Conference on Information Communication and Software Engineering (ICICSE)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 2nd International Conference on Information Communication and Software Engineering (ICICSE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/icicse55337.2022.9828945","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Font design is an important research direction in art design with high commercial value. It traditionally requires trained professionals, which is not only time-consuming and costly but also inefficient. Font-to-font translation is a commonly used font design method, and it is essentially an image synthesis problem. Generative adversarial networks (GANs) have been applied to image synthesis with some success. However, for the task of font-to-font translation, existing GAN-based methods generally produce low-quality visual results, such as incomplete glyphs and distorted font details. To address these problems, we propose a more effective multi-scale CycleGAN for font-to-font translation, called MSM-CycleGAN, which obtains font images with better visual quality. In MSM-CycleGAN, a U-net with multiple outputs (UM) is used as the generator; UM emits generated images at multiple scales, which are then fed into a multi-scale discriminator. The model is trained with unsupervised learning. This multi-scale discrimination effectively improves the detail of the generated images. Experimental results show that our method outperforms other state-of-the-art image synthesis methods and obtains font images with higher visual quality.
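The core idea in the abstract — a generator that outputs images at several scales, each scored by its own discriminator — can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the authors' implementation: the layer sizes, the two-scale setup, and all class names (`MultiOutputUNet`, `MultiScaleDiscriminator`) are assumptions for demonstration only.

```python
# Hypothetical sketch of the multi-scale generator/discriminator pairing
# described in the abstract. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn

class MultiOutputUNet(nn.Module):
    """Tiny encoder-decoder that returns full- and half-resolution outputs,
    standing in for the multi-output U-net (UM) generator."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.dec_half = nn.Conv2d(16, 1, 3, padding=1)   # half-scale output head
        self.up = nn.Sequential(
            nn.ConvTranspose2d(16, 16, 4, stride=2, padding=1), nn.ReLU())
        self.dec_full = nn.Conv2d(16, 1, 3, padding=1)   # full-scale output head
    def forward(self, x):
        h = self.enc(x)
        # Return one generated image per scale, largest first.
        return [torch.tanh(self.dec_full(self.up(h))), torch.tanh(self.dec_half(h))]

class MultiScaleDiscriminator(nn.Module):
    """One small PatchGAN-style critic per scale; each critic scores the
    generator output of the matching resolution."""
    def __init__(self, n_scales=2):
        super().__init__()
        self.critics = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1),
                          nn.LeakyReLU(0.2),
                          nn.Conv2d(16, 1, 4, stride=2, padding=1))
            for _ in range(n_scales))
    def forward(self, images):
        return [critic(img) for critic, img in zip(self.critics, images)]

if __name__ == "__main__":
    g, d = MultiOutputUNet(), MultiScaleDiscriminator()
    outs = g(torch.randn(1, 1, 64, 64))   # two scales: 64x64 and 32x32
    scores = d(outs)                      # one realism map per scale
    print([tuple(o.shape) for o in outs])
```

In a full CycleGAN setup there would be two such generator/discriminator pairs (one per translation direction) plus a cycle-consistency loss; the sketch only shows how multi-scale outputs and multi-scale discrimination fit together.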