Discriminative distance metric learning via class-center guidance
Shijie Zhao, Liang Cai, Fanshuai Meng, RongHua Yang
Applied Soft Computing, Vol. 182, Article 113552 (published 2025-07-06). DOI: 10.1016/j.asoc.2025.113552
Abstract
Distance metric learning is a technique of great importance to machine learning and data processing, as it can effectively improve the generalization performance of algorithms that rely on distance metrics. The method projects the original data into a metric space through a transformation that automatically adjusts the distances between samples, so as to increase between-class distances and decrease within-class distances. To better achieve this goal, we propose a discriminative distance metric learning method via class-center guidance (DML-CG). The proposed DML-CG learns a novel discriminative distance metric by maximizing the trace ratio of between-class covariance to within-class covariance, and at the same time transforms the trace-ratio problem into a ratio-trace problem to find the global optimal solution. In addition, the method selects k nearest neighbors for each training sample to generate sample pairs, and jointly uses local metrics learned under multiple class-center guidance and a global metric to pull samples of the same class closer to their class center and push samples of different classes farther from that class center. In this way, the learned metric also captures the discriminative structure of the data. Meanwhile, global regularization is introduced to improve generalization performance and control overfitting. We design an alternating iteration algorithm to solve the proposed model and theoretically analyze its convergence and complexity. Finally, the effectiveness of the proposed algorithm is demonstrated on structured artificial datasets and UCI datasets as well as unstructured image recognition datasets. Most of the results show that the proposed algorithm outperforms other state-of-the-art distance metric learning methods.
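To make the ratio-trace idea behind the abstract concrete, the sketch below is an illustrative, generic class-center metric learner, not the authors' DML-CG implementation: it builds within-class and between-class scatter matrices from class centers, adds a regularization term, and solves the resulting generalized eigenvalue problem to obtain a linear transformation. The function name `class_center_metric` and the parameters `dim` and `reg` are assumptions for illustration; the paper's k-nearest-neighbor sample pairs and multiple local metrics are omitted here.

```python
# Minimal sketch of a ratio-trace, class-center-based metric learner.
# NOT the DML-CG algorithm from the paper; a simplified illustration only.
import numpy as np
from scipy.linalg import eigh


def class_center_metric(X, y, dim=None, reg=1e-3):
    """X: (n_samples, n_features), y: integer class labels.
    Returns L with shape (n_features, dim); distances are measured in X @ L."""
    classes = np.unique(y)
    n, d = X.shape
    mean_all = X.mean(axis=0)
    Sw = np.zeros((d, d))  # within-class scatter: samples around their class center
    Sb = np.zeros((d, d))  # between-class scatter: class centers around the global mean
    for c in classes:
        Xc = X[y == c]
        center = Xc.mean(axis=0)
        diff = Xc - center
        Sw += diff.T @ diff
        m = (center - mean_all)[:, None]
        Sb += Xc.shape[0] * (m @ m.T)
    Sw += reg * np.eye(d)  # simple global regularization to control overfitting
    # Ratio-trace formulation: generalized eigenproblem Sb v = lambda Sw v.
    vals, vecs = eigh(Sb, Sw)
    order = np.argsort(vals)[::-1]          # keep directions with largest ratio
    dim = dim or min(d, len(classes) - 1)
    return vecs[:, order[:dim]]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three toy classes with shifted means in 5 dimensions.
    X = np.vstack([rng.normal(loc=c, scale=1.0, size=(50, 5)) for c in (0.0, 2.0, 4.0)])
    y = np.repeat([0, 1, 2], 50)
    L = class_center_metric(X, y, dim=2)
    Z = X @ L  # samples projected into the learned metric space
    print(Z.shape)  # (150, 2)
```

In this simplified setting the learned projection plays the role of the global metric described in the abstract; DML-CG additionally combines it with locally learned, class-center-guided metrics over neighbor pairs.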
About the journal:
Applied Soft Computing is an international journal promoting an integrated view of soft computing to solve real-life problems. Its focus is to publish the highest-quality research on the application and convergence of Fuzzy Logic, Neural Networks, Evolutionary Computing, Rough Sets, and other similar techniques to address real-world complexities.
Applied Soft Computing is a rolling publication: articles are published as soon as the editor-in-chief has accepted them, so the website is updated continuously with new articles and publication times are short.