A Novel Deep Learning Model for Accurate Prediction of Image Captions in Fashion Industry

Pulkit Dwivedi, Anushka Upadhyaya
DOI: 10.1109/Confluence52989.2022.9734171
Published in: 2022 12th International Conference on Cloud Computing, Data Science & Engineering (Confluence)
Publication date: 2022-01-27
Citations: 8

Abstract

As the need for automation in the IT sector grows, several fashion companies are employing models that can create appropriate descriptions for product images. This helps buyers better understand the goods, resulting in increased sales for the apparel company. For creating the image descriptions, researchers have used a variety of feature extraction approaches, including multi-layer convolutional neural networks such as VGG-16 and VGG-19. Once the image features are extracted using these convolutional neural network (CNN) models, the text data is processed by a recurrent neural network (RNN) that represents the input text sequence as a fixed-length output vector. Finally, the vector outputs obtained from the digital image and from its description are combined to train the image caption generator model. In this work, we put forward a smaller five-layer convolutional neural network (CNN-5) and compared it with transfer learning models such as VGG-16 and VGG-19. The experiments were carried out on the Fashion MNIST dataset, which consists of 70,000 grayscale images of size 28×28 pixels. Each image is linked to one of ten labels (0-9) representing ten different fashion items. We compared the performance of the proposed methodology and the state-of-the-art models using Bilingual Evaluation Understudy (BLEU-1, BLEU-2, BLEU-3 and BLEU-4) scores. The research demonstrates that a smaller-layered convolutional neural network can reach a degree of accuracy on the Fashion MNIST dataset similar to that of state-of-the-art methods.
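The abstract does not specify the CNN-5 layer configuration. As a purely hypothetical illustration (the kernel and pooling sizes below are assumptions, not the paper's architecture), the standard convolution output-size formula shows how a 28×28 Fashion MNIST input shrinks through an unpadded conv/pool stack before reaching the dense feature layers:

```python
def conv2d_out(size, kernel, stride=1, padding=0):
    # Standard output-size formula: floor((W - K + 2P) / S) + 1
    return (size - kernel + 2 * padding) // stride + 1

# Hypothetical stack: conv 3x3 -> pool 2x2 -> conv 3x3 -> pool 2x2
size = 28                       # Fashion MNIST images are 28x28 grayscale
size = conv2d_out(size, 3)      # conv 3x3, no padding -> 26
size = conv2d_out(size, 2, 2)   # max-pool 2x2, stride 2 -> 13
size = conv2d_out(size, 3)      # conv 3x3 -> 11
size = conv2d_out(size, 2, 2)   # max-pool 2x2 -> 5
print(size)                     # spatial size entering the dense layers
```

Tracking shapes this way is a quick sanity check when sizing a small CNN for low-resolution inputs, where unpadded convolutions erode the feature map quickly.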
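The paper's own evaluation code is not given; a minimal sketch of the BLEU-n metric it reports (single reference, no smoothing — the `sentence_bleu` name and simplifications are mine; a production setup would typically use NLTK's implementation):

```python
import math
from collections import Counter

def sentence_bleu(candidate, reference, max_n=4):
    """BLEU-n for one candidate/reference token list: geometric mean of
    modified n-gram precisions (n = 1..max_n) times a brevity penalty."""
    def ngrams(tokens, n):
        return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        # Clip each n-gram's count by its count in the reference.
        overlap = sum(min(count, ref[gram]) for gram, count in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))

    if min(precisions) == 0:      # any zero precision zeroes the score
        return 0.0
    # Brevity penalty discourages captions shorter than the reference.
    bp = 1.0 if len(candidate) >= len(reference) else math.exp(1 - len(reference) / len(candidate))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

caption = "a red shirt with long sleeves".split()
print(sentence_bleu(caption, caption, max_n=4))  # identical strings score 1.0
```

BLEU-1 rewards single-word overlap, while BLEU-4 requires matching four-word phrases, which is why the higher-order scores are the stricter measure of caption fluency.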