The application and optimization of style transfer neural network based on deep learning in fashion design

Haijing Pan, Adzrool Idzwan bin Ismail, Asmidah Alwi, Massudi Mahmuddin

Systems and Soft Computing, Volume 7, Article 200277. Published 2025-04-29. DOI: 10.1016/j.sasc.2025.200277
https://www.sciencedirect.com/science/article/pii/S277294192500095X
Abstract
Introduction
With the rapid advancement of deep learning technologies, style transfer networks have demonstrated significant potential in the fields of image processing and creative design. Particularly in the realm of fashion design, style transfer techniques offer designers innovative tools to automatically generate diverse style designs, thereby enhancing creativity and diversity. However, existing style transfer methods still face challenges in balancing content preservation and style representation, as well as in computational efficiency. This study aims to explore a Neural Style Transfer (NST)-based model for fashion style transfer to address these issues and improve the efficiency and quality of fashion design.
Methodology
The proposed network architecture consists of three convolutional layers and one deconvolutional layer, designed to efficiently extract and integrate spatial features of fashion elements. Subsequently, the Visual Geometry Group (VGG)-Garment network architecture is employed for feature extraction and style fusion, with optimization algorithms generating high-quality fashion design images. Additionally, by introducing four semantic loss functions—content loss, style loss, color loss, and contour loss—the model ensures the preservation of the original design content while flexibly incorporating other visual styles.
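The abstract names the four semantic losses but not their formulas. As a rough illustration only, the sketch below shows a generic neural-style-transfer loss structure in NumPy: a mean-squared content term, a Gram-matrix style term (standard in NST), plus assumed proxies for the color loss (per-channel mean color distance) and contour loss (finite-difference edge maps). The weights and the color/contour definitions are assumptions, not the paper's actual formulation.

```python
import numpy as np

def gram_matrix(feat):
    """Channel-correlation (Gram) matrix of a (C, H, W) feature map, normalized."""
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(gen_feat, content_feat):
    """MSE between generated and content feature maps."""
    return float(np.mean((gen_feat - content_feat) ** 2))

def style_loss(gen_feat, style_feat):
    """MSE between Gram matrices of generated and style features."""
    return float(np.mean((gram_matrix(gen_feat) - gram_matrix(style_feat)) ** 2))

def color_loss(gen_img, content_img):
    """Assumed proxy: distance between per-channel mean colors of (C, H, W) images."""
    return float(np.mean((gen_img.mean(axis=(1, 2)) - content_img.mean(axis=(1, 2))) ** 2))

def contour_loss(gen_img, content_img):
    """Assumed proxy: MSE between simple finite-difference edge maps."""
    def edges(img):
        gray = img.mean(axis=0)
        # horizontal + vertical gradient magnitudes, cropped to a common shape
        return (np.abs(np.diff(gray, axis=0))[:, :-1]
                + np.abs(np.diff(gray, axis=1))[:-1, :])
    return float(np.mean((edges(gen_img) - edges(content_img)) ** 2))

def total_loss(gen_feat, content_feat, style_feat, gen_img, content_img,
               weights=(1.0, 1e3, 1.0, 1.0)):
    """Weighted sum of the four semantic losses; weights are illustrative."""
    w_c, w_s, w_col, w_con = weights
    return (w_c * content_loss(gen_feat, content_feat)
            + w_s * style_loss(gen_feat, style_feat)
            + w_col * color_loss(gen_img, content_img)
            + w_con * contour_loss(gen_img, content_img))
```

In a full pipeline, `gen_feat`, `content_feat`, and `style_feat` would come from intermediate layers of the VGG-Garment extractor, and an optimizer would update the generated image (or a feed-forward network) to minimize `total_loss`.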
Results
The experimental results demonstrate the following: (1) The proposed model excels in both style transfer effectiveness and computational efficiency. The style retention rate ranges from 82.11 % to 88.54 %, and the content retention rate falls between 87.90 % and 92.56 %. These results indicate that the model effectively integrates diverse style elements while preserving the original design. (2) The proposed method outperforms three other models in terms of Peak Signal-to-Noise Ratio (PSNR) across all six fashion styles. Notably, in the "luxury" style, the PSNR value of the proposed method reaches 32.01, significantly higher than that of the other models. (3) In terms of computational efficiency, the model generates a style-transferred fashion design image in an average of 15.23 s. The storage footprint is 251.45 MB, and the computational resource utilization rate is 60.78 %. These results show a significant improvement over traditional methods.
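PSNR, the fidelity metric reported above, measures on a logarithmic decibel scale how close a generated image is to a reference; higher is better. A standard definition (not specific to this paper) can be written as:

```python
import numpy as np

def psnr(reference, generated, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two same-shaped images."""
    mse = np.mean((np.asarray(reference, dtype=np.float64)
                   - np.asarray(generated, dtype=np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Under this definition, a value like the reported 32.01 dB for the "luxury" style indicates substantially closer pixel-level fidelity to the reference than lower-scoring competing models.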
Discussion
This study makes a significant contribution by proposing a model that enhances visual effects and design diversity. Additionally, it outperforms traditional methods in computational efficiency and resource utilization. This model provides a novel technical approach for the fashion design industry, effectively reducing design costs and enhancing the overall efficiency of the design process.