Pixel to Stroke Sketch Generation Using Reinforcement Learning

Haizhou Wang, Conrad S. Tucker
DOI: 10.1115/detc2019-98481
Published in: Volume 1: 39th Computers and Information in Engineering Conference
Publication date: 2019-11-25
Citations: 1

Abstract

Many engineering design tasks involve creating early conceptual sketches that do not require exact dimensions. Although some previous works focus on automatically generating sketches from reference images, many of them output exactly the same objects as the reference images. There are also models that generate sketches from scratch; these can be divided into pixel-based and stroke-based methods. Pixel-based methods generate a sketch as a whole, without any stroke information, while stroke-based methods generate a sketch by outputting strokes sequentially. Pixel-based methods are frequently used to generate realistic color images and are the more popular of the two, but stroke-based methods have the advantage of scaling to larger canvas dimensions without losing fidelity. An image generated by a stroke-based method contains only strokes on the canvas, so there is no random noise in the blank areas. However, one challenge in the engineering design community is that most sketches are saved as pixel-based images, and many stroke-based methods rely on stroke-based training data, making them ill-suited for generating conceptual design sketches. To overcome these limitations, the authors propose an agent that can learn from pixel-based images and generate stroke-based images. An advantage of such an agent is its ability to use pixel-based training data, which is abundant in design repositories, to train stroke-based methods that are typically constrained by the lack of access to stroke-based training data.
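The abstract does not describe the agent's architecture or training procedure. As an illustration only, the setup it outlines — an agent that emits strokes sequentially onto a canvas and is rewarded by comparing the rendered result against a pixel-based image, so no stroke-level labels are needed — can be sketched as below. All names here (`render_stroke`, `episode`, the random stand-in policy) are hypothetical and not from the paper.

```python
import numpy as np

def render_stroke(canvas, x0, y0, x1, y1):
    """Rasterize a straight-line stroke onto the canvas (a toy renderer;
    the paper's actual stroke model is not specified in the abstract)."""
    n = max(abs(x1 - x0), abs(y1 - y0), 1)
    for t in np.linspace(0.0, 1.0, n + 1):
        x = int(round(x0 + t * (x1 - x0)))
        y = int(round(y0 + t * (y1 - y0)))
        if 0 <= y < canvas.shape[0] and 0 <= x < canvas.shape[1]:
            canvas[y, x] = 1.0
    return canvas

def pixel_reward(canvas, target):
    """Reward: negative pixel-wise squared error against the pixel-based
    target image, so the agent requires no stroke-based training data."""
    return -float(np.mean((canvas - target) ** 2))

def episode(target, policy, n_strokes=10):
    """One drawing episode: the policy emits stroke endpoints sequentially;
    the final reward compares the stroke-rendered canvas to the target."""
    canvas = np.zeros_like(target)
    strokes = []
    for _ in range(n_strokes):
        action = policy(canvas, target)          # (x0, y0, x1, y1)
        canvas = render_stroke(canvas, *action)
        strokes.append(action)
    return strokes, canvas, pixel_reward(canvas, target)

# Demo with a random policy standing in for a learned one.
rng = np.random.default_rng(0)
target = np.zeros((16, 16))
target[4, 2:14] = 1.0                            # pixel-based "sketch": a bar
def random_policy(canvas, target):
    return tuple(int(v) for v in rng.integers(0, 16, size=4))

strokes, canvas, reward = episode(target, random_policy, n_strokes=5)
```

In a full reinforcement-learning treatment, the random policy would be replaced by a trained network and the reward fed back through a policy-gradient or actor-critic update; the point of the sketch is that the supervision signal comes entirely from pixel images.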