{"title":"模式识别的新核与核主成分分析","authors":"J. Isaacs, S. Foo, A. Meyer-Bäse","doi":"10.1109/CIRA.2007.382927","DOIUrl":null,"url":null,"abstract":"Kernel methods are a mathematical tool that provides a generally higher dimensional representation of given data set in feature space for feature recognition and image analysis problems. Typically, the kernel trick is thought of as a method for converting a linear classification learning algorithm into non-linear one, by mapping the original observations into a higher-dimensional non-linear space so that linear classification in the new space is equivalent to non-linear classification in the original space. Moreover, optimal kernels can be designed to capture the natural variation present in the data. In this paper we present the performance results of fifteen novel kernel functions and their respective performance for kernel principal component analysis on five select databases. Empirical results show that our kernels perform as well and better than existing kernels on these databases.","PeriodicalId":301626,"journal":{"name":"2007 International Symposium on Computational Intelligence in Robotics and Automation","volume":"107 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2007-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"11","resultStr":"{\"title\":\"Novel Kernels and Kernel PCA for Pattern Recognition\",\"authors\":\"J. Isaacs, S. Foo, A. Meyer-Bäse\",\"doi\":\"10.1109/CIRA.2007.382927\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Kernel methods are a mathematical tool that provides a generally higher dimensional representation of given data set in feature space for feature recognition and image analysis problems. 
Typically, the kernel trick is thought of as a method for converting a linear classification learning algorithm into non-linear one, by mapping the original observations into a higher-dimensional non-linear space so that linear classification in the new space is equivalent to non-linear classification in the original space. Moreover, optimal kernels can be designed to capture the natural variation present in the data. In this paper we present the performance results of fifteen novel kernel functions and their respective performance for kernel principal component analysis on five select databases. Empirical results show that our kernels perform as well and better than existing kernels on these databases.\",\"PeriodicalId\":301626,\"journal\":{\"name\":\"2007 International Symposium on Computational Intelligence in Robotics and Automation\",\"volume\":\"107 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2007-06-20\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"11\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2007 International Symposium on Computational Intelligence in Robotics and Automation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIRA.2007.382927\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2007 International Symposium on Computational Intelligence in Robotics and Automation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIRA.2007.382927","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Novel Kernels and Kernel PCA for Pattern Recognition
Kernel methods are a mathematical tool that provides a higher-dimensional representation of a given data set in feature space for feature recognition and image analysis problems. Typically, the kernel trick is thought of as a method for converting a linear classification learning algorithm into a non-linear one, by mapping the original observations into a higher-dimensional non-linear space so that linear classification in the new space is equivalent to non-linear classification in the original space. Moreover, optimal kernels can be designed to capture the natural variation present in the data. In this paper we present fifteen novel kernel functions and their respective performance for kernel principal component analysis on five selected databases. Empirical results show that our kernels perform as well as or better than existing kernels on these databases.
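The kernel PCA procedure the abstract refers to can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: it uses a standard RBF kernel as a stand-in (the paper's fifteen novel kernels are not reproduced here), and the function names and the `gamma` parameter are illustrative choices.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    """Standard RBF (Gaussian) kernel, used here as a placeholder kernel."""
    sq = np.sum(X**2, axis=1)
    # Pairwise squared Euclidean distances via the expansion ||x-y||^2
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project training points onto the top principal components in feature space."""
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    one_n = np.ones((n, n)) / n
    # Center the Gram matrix, since PCA requires zero-mean data in feature space
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # eigh returns eigenvalues in ascending order; take the largest ones
    eigvals, eigvecs = np.linalg.eigh(Kc)
    idx = np.argsort(eigvals)[::-1][:n_components]
    # Normalize eigenvectors so the feature-space components have unit norm
    alphas = eigvecs[:, idx] / np.sqrt(np.maximum(eigvals[idx], 1e-12))
    return Kc @ alphas
```

Swapping in a different kernel only requires replacing `rbf_kernel`; the centering and eigendecomposition steps are kernel-independent, which is what makes designing new kernels for a fixed pipeline attractive.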