Author: C. Langmead
Venue: Proceedings of the Asia-Pacific Bioinformatics Conference, pp. 217-226
Publication date: 2005-12-01
DOI: 10.1142/9781860947292_0025
A Randomized Algorithm for Learning Mahalanobis Metrics: Application to Classification and Regression of Biological Data
We present a randomized algorithm for semi-supervised learning of Mahalanobis metrics over R^n. The inputs to the algorithm are a set U of unlabeled points in R^n, a set of pairs of points S = {(x, y)_i}, x, y ∈ U, that are known to be similar, and a set of pairs of points D = {(x, y)_i}, x, y ∈ U, that are known to be dissimilar. The algorithm randomly samples S, D, and m-dimensional subspaces of R^n, and learns a metric for each subspace. The metric over R^n is a linear combination of the subspace metrics. The randomization addresses issues of efficiency and overfitting. Extensions of the algorithm to learning non-linear metrics via kernels, and its use as a pre-processing step for dimensionality reduction, are discussed. The new method is demonstrated on a regression problem (structure-based chemical shift prediction) and a classification problem (predicting clinical outcomes of immunomodulatory strategies for treating severe sepsis).
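The sampling-and-combining scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual learner: the per-subspace metric here is a simple diagonal closed-form heuristic (weighting each coordinate by the ratio of dissimilar-pair to similar-pair spread), and the function names and sampling fractions are assumptions for the sketch. The overall shape matches the description: repeatedly subsample S, D, and an m-dimensional coordinate subspace, learn a metric on that subspace, and average the subspace metrics into one metric over R^n.

```python
import numpy as np

def learn_subspace_metric(X, S, D, dims, eps=1e-8):
    # Placeholder per-subspace learner (diagonal heuristic, NOT the paper's method):
    # weight each retained coordinate by how much dissimilar pairs spread
    # relative to similar pairs along that coordinate.
    Xs = X[:, dims]
    s_spread = np.mean([(Xs[i] - Xs[j]) ** 2 for i, j in S], axis=0)
    d_spread = np.mean([(Xs[i] - Xs[j]) ** 2 for i, j in D], axis=0)
    return np.diag(d_spread / (s_spread + eps))

def randomized_mahalanobis(X, S, D, m, n_rounds, seed=None):
    # Learn A such that d(x, y)^2 = (x - y)^T A (x - y), as a linear
    # combination (here: average) of metrics learned on random
    # m-dimensional coordinate subspaces, each fit to random subsamples
    # of the similar pairs S and dissimilar pairs D.
    rng = np.random.default_rng(seed)
    n = X.shape[1]
    A = np.zeros((n, n))
    for _ in range(n_rounds):
        dims = rng.choice(n, size=m, replace=False)
        S_sub = [S[i] for i in rng.choice(len(S), size=max(1, len(S) // 2), replace=False)]
        D_sub = [D[i] for i in rng.choice(len(D), size=max(1, len(D) // 2), replace=False)]
        M = learn_subspace_metric(X, S_sub, D_sub, dims)
        # Embed the subspace metric back into the full n x n matrix.
        A[np.ix_(dims, dims)] += M / n_rounds
    return A
```

Because each round touches only m of the n coordinates and a subsample of the pair constraints, each subspace fit is cheap, and averaging many such fits is what the abstract credits with curbing overfitting.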