Ling Ma, Chuhang Zou, Ziyi Guo, Tao Li, Zheng Liu, Fengyuan Zou
{"title":"有序失焦:一种相对时尚的排序学习方法","authors":"Ling Ma , Chuhang Zou , Ziyi Guo , Tao Li , Zheng Liu , Fengyuan Zou","doi":"10.1016/j.ipm.2025.104205","DOIUrl":null,"url":null,"abstract":"<div><div>Existing fashion recommendations often rely on natural language processing or content-based image retrieval, overlooking direct aesthetic assessments of fashion images. Given the subjectivity and complexity of this task, we propose treating fashionability as a relative attribute to rank paired clothing images. To address this ranking challenge, we propose Ordinal Focal Loss, which transforms the pairwise ranking problem into a multi-classification task, leveraging ordinal attributes to improve classification boundaries. Furthermore, in terms of fashion feature representation, we propose modeling not just individual items but also their combined effect as an outfit, providing a more holistic and nuanced fashion representation. We introduce the Fashionability3k dataset, comprising 3k image pairs (2398 ordered and 601 similar pairs) with objective relative fashion labels. Experiments on three datasets—our Fashionability3k and two public datasets—show that our method outperforms the baseline by nearly 1 % in ranking accuracy. In the user study, it achieved a 0.72 consistency with human subjective perception. Moreover, combining local and global visual features leads to additional performance gains, with an average improvement of 2.78 % in ordered pairs and 1.09 % in similar pairs. This is the first study to treat fashionability as an objective attribute for comparative analysis, validated through extensive experiments.</div></div>","PeriodicalId":50365,"journal":{"name":"Information Processing & Management","volume":"62 5","pages":"Article 104205"},"PeriodicalIF":7.4000,"publicationDate":"2025-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Ordinal focal loss: A relative fashionability ranking learning method\",\"authors\":\"Ling Ma , Chuhang Zou , Ziyi Guo , Tao Li , Zheng Liu , Fengyuan Zou\",\"doi\":\"10.1016/j.ipm.2025.104205\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Existing fashion recommendations often rely on natural language processing or content-based image retrieval, overlooking direct aesthetic assessments of fashion images. Given the subjectivity and complexity of this task, we propose treating fashionability as a relative attribute to rank paired clothing images. To address this ranking challenge, we propose Ordinal Focal Loss, which transforms the pairwise ranking problem into a multi-classification task, leveraging ordinal attributes to improve classification boundaries. Furthermore, in terms of fashion feature representation, we propose modeling not just individual items but also their combined effect as an outfit, providing a more holistic and nuanced fashion representation. We introduce the Fashionability3k dataset, comprising 3k image pairs (2398 ordered and 601 similar pairs) with objective relative fashion labels. Experiments on three datasets—our Fashionability3k and two public datasets—show that our method outperforms the baseline by nearly 1 % in ranking accuracy. In the user study, it achieved a 0.72 consistency with human subjective perception. Moreover, combining local and global visual features leads to additional performance gains, with an average improvement of 2.78 % in ordered pairs and 1.09 % in similar pairs. 
This is the first study to treat fashionability as an objective attribute for comparative analysis, validated through extensive experiments.</div></div>\",\"PeriodicalId\":50365,\"journal\":{\"name\":\"Information Processing & Management\",\"volume\":\"62 5\",\"pages\":\"Article 104205\"},\"PeriodicalIF\":7.4000,\"publicationDate\":\"2025-05-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Processing & Management\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0306457325001463\",\"RegionNum\":1,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Processing & Management","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0306457325001463","RegionNum":1,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Ordinal focal loss: A relative fashionability ranking learning method
Existing fashion recommendations often rely on natural language processing or content-based image retrieval, overlooking direct aesthetic assessment of fashion images. Given the subjectivity and complexity of this task, we propose treating fashionability as a relative attribute for ranking paired clothing images. To address this ranking challenge, we propose Ordinal Focal Loss, which transforms the pairwise ranking problem into a multi-classification task, leveraging ordinal attributes to improve classification boundaries. Furthermore, for fashion feature representation, we propose modeling not just individual items but also their combined effect as an outfit, providing a more holistic and nuanced fashion representation. We introduce the Fashionability3k dataset, comprising 3k image pairs (2398 ordered and 601 similar pairs) with objective relative fashion labels. Experiments on three datasets (our Fashionability3k and two public datasets) show that our method outperforms the baseline by nearly 1% in ranking accuracy. In a user study, it achieved a consistency of 0.72 with human subjective perception. Moreover, combining local and global visual features yields additional gains, with average improvements of 2.78% on ordered pairs and 1.09% on similar pairs. This is the first study to treat fashionability as an objective attribute for comparative analysis, validated through extensive experiments.
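The abstract does not give the exact formulation of Ordinal Focal Loss, but the core idea of recasting a pairwise fashionability comparison as an ordinal multi-class problem can be sketched. The PyTorch snippet below is a minimal, hypothetical reading of that idea: each pair (A, B) is mapped to one of three ordinal classes (A more fashionable, similar, B more fashionable), a focal term down-weights easy pairs, and an assumed ordinal-distance penalty makes mistakes far from the true label cost more than adjacent ones. The function name ordinal_focal_loss, the three-class mapping, the gamma value, and the distance weighting are illustrative assumptions, not the authors' published definition.

```python
import torch
import torch.nn.functional as F


def ordinal_focal_loss(logits: torch.Tensor,
                       targets: torch.Tensor,
                       gamma: float = 2.0) -> torch.Tensor:
    """Hypothetical ordinal focal loss for pairwise comparison.

    logits:  (batch, num_classes) scores over the ordinal classes.
    targets: (batch,) integer ordinal labels.
    """
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)                 # (batch, C)
    probs = log_probs.exp()

    # Focal modulation: down-weight pairs the model already classifies confidently.
    pt = probs.gather(1, targets.unsqueeze(1)).squeeze(1)     # prob. of the true class
    focal_weight = (1.0 - pt) ** gamma

    # Assumed ordinal term: expected distance between predicted and true class,
    # so errors far from the true ordinal label are penalised more than adjacent ones.
    classes = torch.arange(num_classes, device=logits.device).unsqueeze(0)  # (1, C)
    distance = (classes - targets.unsqueeze(1)).abs().float()               # (batch, C)
    ordinal_penalty = (probs * distance).sum(dim=1)

    ce = -log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)  # standard cross-entropy
    return (focal_weight * (ce + ordinal_penalty)).mean()


if __name__ == "__main__":
    # Toy usage: random tensors stand in for the paper's item/outfit features.
    batch, feat_dim = 8, 512
    feats_a = torch.randn(batch, feat_dim)        # features of image A
    feats_b = torch.randn(batch, feat_dim)        # features of image B
    head = torch.nn.Linear(2 * feat_dim, 3)       # 3 classes: A>B, A~B, A<B
    logits = head(torch.cat([feats_a, feats_b], dim=1))
    labels = torch.randint(0, 3, (batch,))
    loss = ordinal_focal_loss(logits, labels)
    loss.backward()
    print(f"loss = {loss.item():.4f}")
```

In the paper, the comparison head would operate on the proposed fashion representation (individual items combined with their outfit-level effect, plus local and global visual features); the random tensors above are placeholders for those features.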
About the journal:
Information Processing and Management is dedicated to publishing cutting-edge original research at the convergence of computing and information science. Our scope encompasses theory, methods, and applications across various domains, including advertising, business, health, information science, information technology, marketing, and social computing.
We aim to cater to the interests of both primary researchers and practitioners by offering an effective platform for the timely dissemination of advanced and topical issues in this interdisciplinary field. The journal places particular emphasis on original research articles, research survey articles, research method articles, and articles addressing critical applications of research. Join us in advancing knowledge and innovation at the intersection of computing and information science.