Improving classification precision by implicit kernels motivated by manifold learning

Authors: Yuexian Hou, Jingyi Wu, Pilian He
DOI: 10.1109/IJCNN.2006.246850
Venue: IEEE International Joint Conference on Neural Networks (IJCNN 2006)
Abstract: Several algorithms, e.g., Isomap, self-organizing isometric embedding (SIE), locally linear embedding (LLE), and Laplacian eigenmaps, have recently been proposed to learn a low-dimensional nonlinear manifold embedded in a high-dimensional space. Motivated by these algorithms, there is a growing trend of exploiting the intrinsic manifold structure of data to improve the precision and/or efficiency of classification, under the assumption that the high-dimensional observable data reside on a low-dimensional manifold of latent variables. However, each of these methods has its own flaws. In this work, we unify the problems of supervised manifold learning in a kernel view and propose a novel implicit kernel construction method, supervised locally principal direction preservation kernel (SLPDK) construction, which combines the advantages of existing implicit kernel construction methods motivated by manifold learning while trying to overcome their disadvantages. SLPDK uses class information and the locally principal direction of the manifold to implement an approximately symmetric embedding. Implicit kernels constructed by SLPDK have a natural geometrical interpretation and can yield a considerable improvement in classification precision when the condition of locally linear manifold separability (LLMS) holds.
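The common recipe behind such methods can be illustrated with a minimal sketch: any embedding x_i → y_i of the data into low-dimensional coordinates induces an implicit kernel K(x_i, x_j) = ⟨y_i, y_j⟩, which a kernel classifier can then use. The sketch below is NOT the paper's SLPDK (whose details are not given here); it substitutes plain PCA for the manifold learner and a kernel nearest-centroid rule for the classifier, purely to show how an embedding-induced kernel plugs into classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes in 3-D that really live near a 1-D latent curve,
# standing in for "high-dimensional data on a low-dimensional manifold".
t0 = rng.uniform(0.0, 1.0, 50)   # latent parameters, class 0
t1 = rng.uniform(1.5, 2.5, 50)   # latent parameters, class 1

def lift(t):
    # Embed the 1-D latent variable in 3-D observation space.
    return np.column_stack([np.cos(t), np.sin(t), 0.1 * t])

X = np.vstack([lift(t0), lift(t1)]) + 0.01 * rng.standard_normal((100, 3))
y = np.array([0] * 50 + [1] * 50)

# Step 1: a low-dimensional embedding (PCA here; a manifold learner such as
# Isomap or LLE, or SLPDK's supervised construction, would go in its place).
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Y = Xc @ Vt[:2].T                # 2-D embedding coordinates

# Step 2: the embedding induces an implicit kernel.
K = Y @ Y.T                      # K[i, j] = <y_i, y_j>

# Step 3: a nearest-centroid classifier expressed purely through K:
# squared feature-space distance to the class-c mean is
# K[i,i] - 2*mean_j K[i,j] + mean_{j,k} K[j,k] over j,k in class c.
def predict(K, y, i):
    dists = []
    for c in (0, 1):
        idx = np.where(y == c)[0]
        dists.append(K[i, i] - 2 * K[i, idx].mean() + K[np.ix_(idx, idx)].mean())
    return int(np.argmin(dists))

# Resubstitution accuracy on the training set (illustration only).
preds = np.array([predict(K, y, i) for i in range(len(y))])
accuracy = (preds == y).mean()
print(accuracy)
```

Because the classifier touches only K, the embedding never needs an explicit inverse map; this is what makes the kernel "implicit". SLPDK differs from this sketch by additionally using class labels and locally principal directions when building the embedding.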