Dual Contrastive Label Enhancement

Ren Guan, Yifei Wang, Xinyuan Liu, Bin Chen, Jihua Zhu

Pattern Recognition, Volume 160, Article 111183. DOI: 10.1016/j.patcog.2024.111183. Published 2024-11-15.
https://www.sciencedirect.com/science/article/pii/S0031320324009348
Label Enhancement (LE) strives to convert logical labels of instances into label distributions to provide data preparation for label distribution learning (LDL). Existing LE methods ordinarily neglect to consider original features and logical labels as two complementary descriptive views of instances for extracting implicit related information across views, resulting in insufficient utilization of the feature and logical label information of the instances. To address this issue, we propose a novel method named Dual Contrastive Label Enhancement (DCLE). This method regards original features and logical labels as two view-specific descriptions and encodes them into a unified projection space. We employ dual contrastive learning strategy at both instance-level and class-level to excavate cross-view consensus information and distinguish instance representations by exploring inherent correlations among features, thereby generating high-level representations of the instances. Subsequently, to recover label distributions from obtained high-level representations, we design a distance-minimized and margin-penalized training strategy and preserve the consistency of label attributes. Extensive experiments conducted on 13 benchmark datasets of LDL validate the efficacy and competitiveness of DCLE.
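The abstract describes contrastive learning across two views of each instance (original features and logical labels) encoded into a shared projection space. The paper's actual loss is not given here, so the following is only a minimal sketch of a standard instance-level cross-view contrastive (InfoNCE-style) objective, where matching rows of the two view embeddings are positives and all other cross-view pairs are negatives; the function names and the temperature value are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce(z_feat, z_lab, temperature=0.5):
    """Instance-level cross-view contrastive loss.

    z_feat, z_lab: (n, d) L2-normalized embeddings of the same n instances
    under the feature view and the logical-label view. Row i of one view is
    the positive for row i of the other; all other pairs act as negatives.
    """
    sim = z_feat @ z_lab.T / temperature           # (n, n) cross-view similarities
    sim = sim - sim.max(axis=1, keepdims=True)     # stabilize the softmax
    log_softmax = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_softmax))          # positives sit on the diagonal

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
z = l2_normalize(rng.normal(size=(8, 16)))
# Perfectly aligned views should score a lower loss than unrelated views.
aligned = info_nce(z, z)
unrelated = info_nce(z, l2_normalize(rng.normal(size=(8, 16))))
print(aligned < unrelated)
```

The class-level counterpart mentioned in the abstract would apply the same contrastive form to per-class prototypes rather than per-instance rows; the margin-penalized recovery of label distributions is a separate training stage not sketched here.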
Journal introduction:
The field of Pattern Recognition is both mature and rapidly evolving, playing a crucial role in various related fields such as computer vision, image processing, text analysis, and neural networks. It closely intersects with machine learning and is being applied in emerging areas like biometrics, bioinformatics, multimedia data analysis, and data science. The journal Pattern Recognition, established half a century ago during the early days of computer science, has since grown significantly in scope and influence.