The application and optimization of style transfer neural network based on deep learning in fashion design

Haijing Pan, Adzrool Idzwan bin Ismail, Asmidah Alwi, Massudi Mahmuddin
{"title":"The application and optimization of style transfer neural network based on deep learning in fashion design","authors":"Haijing Pan ,&nbsp;Adzrool Idzwan bin Ismail ,&nbsp;Asmidah Alwi ,&nbsp;Massudi Mahmuddin","doi":"10.1016/j.sasc.2025.200277","DOIUrl":null,"url":null,"abstract":"<div><h3>Introduction</h3><div>With the rapid advancement of deep learning technologies, style transfer networks have demonstrated significant potential in the fields of image processing and creative design. Particularly in the realm of fashion design, style transfer techniques offer designers innovative tools to automatically generate diverse style designs, thereby enhancing creativity and diversity. However, existing style transfer methods still face challenges in balancing content preservation and style representation, as well as in computational efficiency. This study aims to explore a Neural Style Transfer (NST)-based model for fashion style transfer to address these issues and improve the efficiency and quality of fashion design.</div></div><div><h3>Methodology</h3><div>The proposed network architecture consists of three convolutional layers and one deconvolutional layer, designed to efficiently extract and integrate spatial features of fashion elements. Subsequently, the Visual Geometry Group (VGG)-Garment network architecture is employed for feature extraction and style fusion, with optimization algorithms generating high-quality fashion design images. Additionally, by introducing four semantic loss functions—content loss, style loss, color loss, and contour loss—the model ensures the preservation of the original design content while flexibly incorporating other visual styles.</div></div><div><h3>Results</h3><div>The experimental results demonstrate the following: (1) The proposed model excels in both style transfer effectiveness and computational efficiency. The style retention rate ranges from 82.11 % to 88.54 %. The content retention rate falls between 87.90 % and 92.56 %. These results indicate that the model effectively integrates diverse style elements while preserving the original design. (2) The proposed method outperforms three other models in terms of Peak Signal-to-Noise Ratio (PSNR) across all six fashion styles. Notably, in the \"luxury\" style, the PSNR value of the proposed method reaches 32.01, significantly higher than that of other models. (3) In terms of computational efficiency, the model generates a style-transferred fashion design image in an average of 15.23 s. The storage footprint is 251.45 MB, and the computational resource utilization rate is 60.78 %. These results show a significant improvement over traditional method.</div></div><div><h3>Discussion</h3><div>This study makes a significant contribution by proposing a model that enhances visual effects and design diversity. Additionally, it outperforms traditional methods in computational efficiency and resource utilization. 
This model provides a novel technical approach for the fashion design industry, effectively reducing design costs and enhancing the overall efficiency of the design process.</div></div>","PeriodicalId":101205,"journal":{"name":"Systems and Soft Computing","volume":"7 ","pages":"Article 200277"},"PeriodicalIF":0.0000,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Systems and Soft Computing","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S277294192500095X","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Introduction

With the rapid advancement of deep learning technologies, style transfer networks have demonstrated significant potential in the fields of image processing and creative design. Particularly in the realm of fashion design, style transfer techniques offer designers innovative tools to automatically generate diverse style designs, thereby enhancing creativity and diversity. However, existing style transfer methods still face challenges in balancing content preservation and style representation, as well as in computational efficiency. This study aims to explore a Neural Style Transfer (NST)-based model for fashion style transfer to address these issues and improve the efficiency and quality of fashion design.

Methodology

The proposed network architecture consists of three convolutional layers and one deconvolutional layer, designed to efficiently extract and integrate spatial features of fashion elements. Subsequently, the Visual Geometry Group (VGG)-Garment network architecture is employed for feature extraction and style fusion, with optimization algorithms generating high-quality fashion design images. Additionally, four semantic loss functions (content loss, style loss, color loss, and contour loss) ensure that the original design content is preserved while other visual styles are flexibly incorporated, as sketched below.
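The paper does not include an implementation, so the following is a minimal PyTorch sketch of the pipeline described above. The class name `TransferNet`, its channel widths and kernel sizes, the ImageNet VGG-19 stand-in for the authors' VGG-Garment extractor, the specific color-loss and contour-loss formulations, and the loss weights are all illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

class TransferNet(nn.Module):
    """Three convolutional layers plus one deconvolutional
    (transposed-convolution) layer, as the paper describes.
    Channel widths, kernel sizes, and the final 1x1 RGB
    projection are illustrative assumptions. Input H and W
    are assumed divisible by 4."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, kernel_size=9, padding=4)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1)
        self.conv3 = nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1)
        # One transposed conv undoes the two stride-2 downsamplings.
        self.deconv = nn.ConvTranspose2d(128, 32, kernel_size=4, stride=4)
        self.to_rgb = nn.Conv2d(32, 3, kernel_size=1)

    def forward(self, x):
        h = F.relu(self.conv1(x))
        h = F.relu(self.conv2(h))
        h = F.relu(self.conv3(h))
        h = F.relu(self.deconv(h))
        return torch.sigmoid(self.to_rgb(h))

# Frozen ImageNet VGG-19 features stand in for the paper's
# VGG-Garment extractor (an assumption; the layer cut-off is
# arbitrary, and inputs are assumed already normalized).
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features[:21].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    """Gram matrix, the usual basis of the NST style loss."""
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def total_loss(out, content, style, w=(1.0, 10.0, 1.0, 1.0)):
    """Four-term objective named in the paper: content, style,
    color, and contour losses. Color (mean-color matching) and
    contour (Sobel-edge matching) are plausible stand-ins, not
    the paper's exact formulations; the weights w are assumptions."""
    fo, fc, fs = vgg(out), vgg(content), vgg(style)
    l_content = F.mse_loss(fo, fc)
    l_style = F.mse_loss(gram(fo), gram(fs))
    l_color = F.mse_loss(out.mean(dim=(2, 3)), style.mean(dim=(2, 3)))
    sobel = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                         device=out.device).view(1, 1, 3, 3).repeat(3, 1, 1, 1)
    edge = lambda im: F.conv2d(im, sobel, padding=1, groups=3)  # depthwise Sobel
    l_contour = F.mse_loss(edge(out), edge(content))
    return w[0] * l_content + w[1] * l_style + w[2] * l_color + w[3] * l_contour
```

Training would then minimize `total_loss(TransferNet()(content), content, style)` over a garment dataset; the single-feed-forward generator (rather than per-image optimization) is what makes the reported sub-minute generation times plausible.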

Results

The experimental results demonstrate the following: (1) The proposed model excels in both style transfer effectiveness and computational efficiency. The style retention rate ranges from 82.11% to 88.54%, and the content retention rate falls between 87.90% and 92.56%, indicating that the model integrates diverse style elements while preserving the original design. (2) The proposed method outperforms three other models in terms of Peak Signal-to-Noise Ratio (PSNR) across all six fashion styles. Notably, in the "luxury" style, the PSNR of the proposed method reaches 32.01, significantly higher than that of the other models. (3) In terms of computational efficiency, the model generates a style-transferred fashion design image in an average of 15.23 s, with a storage footprint of 251.45 MB and a computational resource utilization rate of 60.78%. These results represent a significant improvement over traditional methods.
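For reference, PSNR measures reconstruction fidelity in decibels via the mean squared error against a reference image. A minimal sketch, assuming image tensors scaled to [0, 1]:

```python
import torch

def psnr(x: torch.Tensor, y: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    """PSNR in dB between two same-shaped images in [0, max_val].
    Higher is better; a value around 32 dB (the paper's "luxury"
    result) indicates close agreement with the reference image."""
    mse = torch.mean((x - y) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

# Example with synthetic data: small added noise yields a high PSNR.
a = torch.rand(1, 3, 256, 256)
b = (a + 0.01 * torch.randn_like(a)).clamp(0.0, 1.0)
print(f"{psnr(a, b).item():.2f} dB")
```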

Discussion

This study contributes a model that enhances visual quality and design diversity while outperforming traditional methods in computational efficiency and resource utilization. The model offers the fashion design industry a novel technical approach that reduces design costs and improves the overall efficiency of the design process.