Cairong Yan, Anan Ding, Yanting Zhang, Zijian Wang
{"title":"基于层次属性嵌入的时尚相似度学习","authors":"Cairong Yan, Anan Ding, Yanting Zhang, Zijian Wang","doi":"10.1109/DSAA53316.2021.9564236","DOIUrl":null,"url":null,"abstract":"Embedding items directly into a common feature space, and then measuring the similarity by calculating the feature distance in this space, has become the main method for similarity learning in current fashion retrieval tasks. The method is simple and efficient, but it ignores the correlation among fashion attributes and the impact of these correlations on the feature space, thereby reducing the accuracy of retrieval. Since the number of fashion attributes is large and the semantic granularity is also different, how to capture the relationship between fashion attributes and perform refined embedding to accurately represent fashion items is a challenge. In this paper, by constructing an attribute tree, we propose a hierarchical attribute embedding method for representing fashion items to enhance the relationship between attributes and use masking technology to disentangle different attributes. Based on these modules, we propose a hierarchical attribute-aware embedding network (HAEN) which takes images and attributes as input, learns multiple attribute-specific embedding spaces, and measures fine-grained similarity in the corresponding spaces. The extensive experimental result on two fashion-related public datasets FashionAI and DARN shows the superiority (+5.11% and +3.09% in MAP, respectively) of our proposed HAEN compared with state-of-the-art methods.","PeriodicalId":129612,"journal":{"name":"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)","volume":"58 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"Learning Fashion Similarity Based on Hierarchical Attribute Embedding\",\"authors\":\"Cairong Yan, Anan Ding, Yanting Zhang, Zijian Wang\",\"doi\":\"10.1109/DSAA53316.2021.9564236\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Embedding items directly into a common feature space, and then measuring the similarity by calculating the feature distance in this space, has become the main method for similarity learning in current fashion retrieval tasks. The method is simple and efficient, but it ignores the correlation among fashion attributes and the impact of these correlations on the feature space, thereby reducing the accuracy of retrieval. Since the number of fashion attributes is large and the semantic granularity is also different, how to capture the relationship between fashion attributes and perform refined embedding to accurately represent fashion items is a challenge. In this paper, by constructing an attribute tree, we propose a hierarchical attribute embedding method for representing fashion items to enhance the relationship between attributes and use masking technology to disentangle different attributes. Based on these modules, we propose a hierarchical attribute-aware embedding network (HAEN) which takes images and attributes as input, learns multiple attribute-specific embedding spaces, and measures fine-grained similarity in the corresponding spaces. 
The extensive experimental result on two fashion-related public datasets FashionAI and DARN shows the superiority (+5.11% and +3.09% in MAP, respectively) of our proposed HAEN compared with state-of-the-art methods.\",\"PeriodicalId\":129612,\"journal\":{\"name\":\"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)\",\"volume\":\"58 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-10-06\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DSAA53316.2021.9564236\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 8th International Conference on Data Science and Advanced Analytics (DSAA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DSAA53316.2021.9564236","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Learning Fashion Similarity Based on Hierarchical Attribute Embedding
Embedding items directly into a common feature space and then measuring similarity by the feature distance in that space has become the dominant approach to similarity learning in current fashion retrieval tasks. The approach is simple and efficient, but it ignores the correlations among fashion attributes and their impact on the feature space, which reduces retrieval accuracy. Because fashion attributes are numerous and differ in semantic granularity, capturing the relationships between attributes and performing fine-grained embedding that accurately represents fashion items is challenging. In this paper, we construct an attribute tree and propose a hierarchical attribute embedding method for representing fashion items that strengthens the relationships between attributes, and we use a masking technique to disentangle different attributes. Based on these modules, we propose a hierarchical attribute-aware embedding network (HAEN) that takes images and attributes as input, learns multiple attribute-specific embedding spaces, and measures fine-grained similarity in the corresponding spaces. Extensive experiments on two fashion-related public datasets, FashionAI and DARN, show the superiority of the proposed HAEN over state-of-the-art methods (+5.11% and +3.09% in MAP, respectively).
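To make the idea of attribute-specific embedding spaces concrete, below is a minimal PyTorch-style sketch of the general technique the abstract describes: a shared image encoder whose feature is modulated by a learned per-attribute mask, so that similarity is measured separately in each attribute's space. This is an illustration under stated assumptions (ResNet-18 backbone, sigmoid masks, cosine similarity; names such as AttributeMaskedEmbedding are invented here), not the paper's HAEN implementation, which additionally organizes attributes hierarchically in a tree.

```python
# Illustrative sketch (not the authors' code): attribute-specific embeddings
# obtained by masking a shared image feature, so similarity is computed
# per attribute. Module and parameter names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class AttributeMaskedEmbedding(nn.Module):
    def __init__(self, num_attributes: int, embed_dim: int = 512):
        super().__init__()
        # Shared image encoder (backbone choice is an assumption).
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()            # yields a 512-d global feature
        self.encoder = backbone
        # One learnable mask per attribute; a sigmoid keeps entries in (0, 1),
        # softly selecting the feature dimensions relevant to that attribute.
        self.masks = nn.Parameter(torch.randn(num_attributes, 512))
        self.proj = nn.Linear(512, embed_dim)

    def forward(self, images: torch.Tensor, attr_ids: torch.Tensor) -> torch.Tensor:
        feat = self.encoder(images)                 # (B, 512) shared feature
        mask = torch.sigmoid(self.masks[attr_ids])  # (B, 512) attribute mask
        emb = self.proj(feat * mask)                # attribute-specific embedding
        return F.normalize(emb, dim=-1)             # unit length for cosine similarity


def attribute_similarity(model, img_a, img_b, attr_id):
    """Cosine similarity of two image batches within one attribute-specific space."""
    ids = torch.full((img_a.size(0),), attr_id, dtype=torch.long)
    return (model(img_a, ids) * model(img_b, ids)).sum(dim=-1)
```

In a setup like this, training would typically pull same-attribute-value pairs together and push different-value pairs apart (for example with a triplet loss) within each attribute-specific space; the paper's hierarchical attribute tree would further constrain how those spaces relate to one another.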