OutCLIP, A New Multi-Outfit CLIP Based Triplet Network
Zahra Haghgu, R. Azmi, Lachin Zamani, Fatemeh Moradian
2023 28th International Computer Conference, Computer Society of Iran (CSICC)
Published: 2023-01-25 · DOI: 10.1109/CSICC58665.2023.10105384
Abstract
Choosing a proper outfit is a problem we face every day. People increasingly shop on online websites, a trend the COVID-19 pandemic has accelerated. In this research, we propose a new architecture for multi-fashion-item retrieval from a website database. We deploy a CLIP transformer model, in place of convolutional neural networks, as the branch of a triplet network. We also add a long short-term memory (LSTM) network that automatically extracts and encodes image features to generate descriptive text for each input image. Our OutCLIP model achieves 83% precision and 85% recall on multi-item retrieval. This model can be trained and applied to fashion-retrieval problems and improves on previously proposed models: considering the descriptive text and the image together gives the model a better understanding of the concept and improves its generalization.
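
The paper does not include code, but the core idea described in the abstract — a triplet network whose shared branch is a CLIP image encoder trained with a triplet loss — can be illustrated with a minimal PyTorch sketch. The checkpoint name, the frozen backbone, the small trainable projection head, the embedding size, and the margin below are all illustrative assumptions rather than the authors' settings, and the LSTM captioning branch from the abstract is omitted for brevity.

```python
import torch
import torch.nn as nn
from transformers import CLIPModel


class TripletCLIP(nn.Module):
    """Triplet network whose shared branch is a (frozen) CLIP image encoder."""

    def __init__(self, clip_name="openai/clip-vit-base-patch32", embed_dim=256):
        super().__init__()
        self.clip = CLIPModel.from_pretrained(clip_name)
        for p in self.clip.parameters():
            p.requires_grad = False  # assumption: freeze the CLIP backbone
        # Small trainable head on top of CLIP's projected image features.
        self.head = nn.Linear(self.clip.config.projection_dim, embed_dim)

    def embed(self, pixel_values):
        # get_image_features returns CLIP's projected image embeddings.
        feats = self.clip.get_image_features(pixel_values=pixel_values)
        return nn.functional.normalize(self.head(feats), dim=-1)

    def forward(self, anchor, positive, negative):
        # The same branch (shared weights) embeds all three images.
        return self.embed(anchor), self.embed(positive), self.embed(negative)


model = TripletCLIP()
loss_fn = nn.TripletMarginLoss(margin=0.2)  # margin is an assumption

# Dummy batches standing in for anchor / compatible / incompatible outfits.
anchor = torch.randn(2, 3, 224, 224)
positive = torch.randn(2, 3, 224, 224)
negative = torch.randn(2, 3, 224, 224)

ea, ep, en = model(anchor, positive, negative)
loss = loss_fn(ea, ep, en)  # pulls positives toward the anchor, pushes negatives away
```

At retrieval time, one would embed the query and every catalog image with the same branch and rank items by cosine similarity of the normalized embeddings; the triplet loss shapes the space so that compatible items land closer to the anchor than incompatible ones.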