Deep Convolutional Nets Learning Classification for Artistic Style Transfer

Sci. Program. | Pub Date: 2022-01-10 | DOI: 10.1155/2022/2038740
R. D. Kumar, E. G. Julie, Y. H. Robinson, S. Vimal, G. Dhiman, Murugesh Veerasamy
{"title":"艺术风格迁移的深度卷积网络学习分类","authors":"R. D. Kumar, E. G. Julie, Y. H. Robinson, S. Vimal, G. Dhiman, Murugesh Veerasamy","doi":"10.1155/2022/2038740","DOIUrl":null,"url":null,"abstract":"Humans have mastered the skill of creativity for many decades. The process of replicating this mechanism is introduced recently by using neural networks which replicate the functioning of human brain, where each unit in the neural network represents a neuron, which transmits the messages from one neuron to other, to perform subconscious tasks. Usually, there are methods to render an input image in the style of famous art works. This issue of generating art is normally called nonphotorealistic rendering. Previous approaches rely on directly manipulating the pixel representation of the image. While using deep neural networks which are constructed using image recognition, this paper carries out implementations in feature space representing the higher levels of the content image. Previously, deep neural networks are used for object recognition and style recognition to categorize the artworks consistent with the creation time. This paper uses Visual Geometry Group (VGG16) neural network to replicate this dormant task performed by humans. Here, the images are input where one is the content image which contains the features you want to retain in the output image and the style reference image which contains patterns or images of famous paintings and the input image which needs to be style and blend them together to produce a new image where the input image is transformed to look like the content image but “sketched” to look like the style image.","PeriodicalId":21628,"journal":{"name":"Sci. Program.","volume":"16 1","pages":"2038740:1-2038740:9"},"PeriodicalIF":0.0000,"publicationDate":"2022-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Deep Convolutional Nets Learning Classification for Artistic Style Transfer\",\"authors\":\"R. D. Kumar, E. G. Julie, Y. H. Robinson, S. Vimal, G. Dhiman, Murugesh Veerasamy\",\"doi\":\"10.1155/2022/2038740\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Humans have mastered the skill of creativity for many decades. The process of replicating this mechanism is introduced recently by using neural networks which replicate the functioning of human brain, where each unit in the neural network represents a neuron, which transmits the messages from one neuron to other, to perform subconscious tasks. Usually, there are methods to render an input image in the style of famous art works. This issue of generating art is normally called nonphotorealistic rendering. Previous approaches rely on directly manipulating the pixel representation of the image. While using deep neural networks which are constructed using image recognition, this paper carries out implementations in feature space representing the higher levels of the content image. Previously, deep neural networks are used for object recognition and style recognition to categorize the artworks consistent with the creation time. This paper uses Visual Geometry Group (VGG16) neural network to replicate this dormant task performed by humans. 
Here, the images are input where one is the content image which contains the features you want to retain in the output image and the style reference image which contains patterns or images of famous paintings and the input image which needs to be style and blend them together to produce a new image where the input image is transformed to look like the content image but “sketched” to look like the style image.\",\"PeriodicalId\":21628,\"journal\":{\"name\":\"Sci. Program.\",\"volume\":\"16 1\",\"pages\":\"2038740:1-2038740:9\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-01-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Sci. Program.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1155/2022/2038740\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Sci. Program.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2022/2038740","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 4

Abstract

Humans have practiced creative skills for many decades. Recent attempts to replicate this ability use neural networks modeled on the functioning of the human brain, in which each unit represents a neuron that passes messages to other neurons in order to perform subconscious tasks. Methods already exist for rendering an input image in the style of famous artworks; this problem of generating art is commonly called non-photorealistic rendering. Previous approaches relied on directly manipulating the pixel representation of the image. Using deep neural networks built for image recognition, this paper instead works in a feature space that represents the higher-level content of the image. Deep neural networks have previously been applied to object recognition and style recognition, for example to categorize artworks by their period of creation. This paper uses the Visual Geometry Group (VGG16) network to replicate this latent human ability. Two images are provided as input: a content image, which contains the features to be retained in the output, and a style reference image, which contains the patterns of a famous painting. The two are blended to produce a new image that preserves the content image but is “sketched” to look like the style image.
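As a rough illustration of the pipeline the abstract describes, the sketch below performs VGG16-based style transfer in PyTorch/torchvision: deep features are matched to the content image while Gram matrices of shallower features are matched to the style image. The layer indices, loss weights, optimizer, and step count are assumptions made for illustration; the paper does not specify its exact configuration.

# A minimal, self-contained sketch (not the authors' exact setup) of VGG16-based
# style transfer. Assumes PyTorch and torchvision are installed.
import torch
import torch.nn.functional as F
from torchvision import models

# Frozen, pretrained VGG16 used only as a feature extractor.
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {20}               # relu4_2 in vgg.features (assumed choice)
STYLE_LAYERS = {1, 6, 11, 18, 25}   # relu1_1 ... relu5_1 (assumed choice)

def extract_features(img):
    """Collect activations at the chosen VGG16 layers for one (1, 3, H, W) tensor."""
    feats, x = {}, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in CONTENT_LAYERS or i in STYLE_LAYERS:
            feats[i] = x
    return feats

def gram_matrix(feat):
    """Channel-correlation (Gram) matrix of a feature map; summarizes style statistics."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_transfer(content, style, steps=300, style_weight=1e6, content_weight=1.0):
    """Optimize a copy of the content image so its deep features match the content
    image while its Gram matrices match the style image. Both inputs are
    ImageNet-normalized (1, 3, H, W) tensors."""
    target = content.clone().requires_grad_(True)
    content_feats = extract_features(content)
    style_grams = {i: gram_matrix(f)
                   for i, f in extract_features(style).items() if i in STYLE_LAYERS}
    optimizer = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        feats = extract_features(target)
        c_loss = sum(F.mse_loss(feats[i], content_feats[i]) for i in CONTENT_LAYERS)
        s_loss = sum(F.mse_loss(gram_matrix(feats[i]), style_grams[i]) for i in STYLE_LAYERS)
        loss = content_weight * c_loss + style_weight * s_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return target.detach()

In practice the content and style images would be loaded and ImageNet-normalized with torchvision transforms, resized to a common size, and de-normalized before saving the stylized result.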