Sketch-aae: A Seq2Seq Model to Generate Sketch Drawings
Jia Lu, Xueming Li, Xianlin Zhang
Proceedings of the 3rd International Conference on Vision, Image and Signal Processing, 2019-08-26
DOI: 10.1145/3387168.3387178 (https://doi.org/10.1145/3387168.3387178)
Citations: 3
Abstract
Sketches play an important role in human nonverbal communication and are a powerful way to describe specific objects visually. Generating human free-hand sketches has become a topical problem in computer graphics and vision, motivated by sketch-related applications such as sketch object recognition. Existing sketch-generation methods fail to exploit the stroke-sequence information of human free-hand sketches. In particular, a recent study proposed an end-to-end variational autoencoder (VAE) model, sketch-rnn, that learns to sketch from human input; however, sketch-rnn is strongly affected by the original input, which reduces its robustness. In this paper, we propose a sequence-to-sequence model, sketch-aae, that generates multiple categories of human-like sketches of higher quality than sketch-rnn. We achieve this by introducing an adversarial autoencoder (AAE) model, which uses a generative adversarial network (GAN) to improve the robustness of the VAE. To the best of our knowledge, this is the first time an AAE model has been used to synthesize sketches. A VGGNet classification model is then used to demonstrate the similarity between our generated sketches and human free-hand sketches. Extensive qualitative and quantitative experiments demonstrate that the proposed model is superior to the state of the art for sketch generation and multi-class sketch classification.
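To make the AAE idea in the abstract concrete: an adversarial autoencoder trains a reconstruction objective like a VAE, but replaces the KL term with a GAN discriminator that pushes the encoder's latent codes toward a prior. The following minimal NumPy sketch shows the three losses balanced each training step. All dimensions, the linear encoder/decoder, and the random weights are illustrative assumptions for clarity only; the paper's actual model is a recurrent seq2seq network over stroke sequences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumed, not the paper's settings).
SEQ_LEN, STROKE_DIM, LATENT_DIM = 10, 5, 8

def encoder(strokes, W_enc):
    """Map a stroke sequence (SEQ_LEN, STROKE_DIM) to a latent code."""
    return np.tanh(strokes.reshape(-1) @ W_enc)

def decoder(z, W_dec):
    """Map a latent code back to a stroke sequence."""
    return (z @ W_dec).reshape(SEQ_LEN, STROKE_DIM)

def discriminator(z, W_dis):
    """Sigmoid score: probability that z came from the Gaussian prior."""
    return 1.0 / (1.0 + np.exp(-(z @ W_dis)))

# Random parameters stand in for trained weights.
W_enc = 0.1 * rng.normal(size=(SEQ_LEN * STROKE_DIM, LATENT_DIM))
W_dec = 0.1 * rng.normal(size=(LATENT_DIM, SEQ_LEN * STROKE_DIM))
W_dis = 0.1 * rng.normal(size=(LATENT_DIM,))

strokes = rng.normal(size=(SEQ_LEN, STROKE_DIM))  # one synthetic "sketch"
z_fake = encoder(strokes, W_enc)                  # code from the encoder
z_real = rng.normal(size=(LATENT_DIM,))           # sample from the N(0, I) prior

# The three losses an AAE balances each step:
recon_loss = np.mean((decoder(z_fake, W_dec) - strokes) ** 2)        # autoencoder
dis_loss = (-np.log(discriminator(z_real, W_dis))                    # discriminator
            - np.log(1.0 - discriminator(z_fake, W_dis)))
gen_loss = -np.log(discriminator(z_fake, W_dis))                     # encoder fools it
```

Minimizing `gen_loss` over the encoder while minimizing `dis_loss` over the discriminator drives the aggregate latent distribution toward the prior, which is the adversarial regularization that this paper uses in place of the VAE's KL penalty.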