{"title":"随机展开","authors":"Ke Sun, E. Bruno, S. Marchand-Maillet","doi":"10.1109/MLSP.2012.6349713","DOIUrl":null,"url":null,"abstract":"This paper proposes a nonlinear dimensionality reduction technique called Stochastic Unfolding (SU). Similar to Stochastic Neighbour Embedding (SNE), N input signals are first encoded into a N × N matrix of probability distribution(s) for subsequent learning. Unlike SNE, these probabilities are not to be preserved in the embedding, but to be deformed in the way that the embedded signals have less curvature than the original signals. The cost function is based on another type of statistical estimation instead of the commonly-used maximum likelihood estimator. Its gradient presents a Mexican-hat shape with local attraction and remote repulsion, which was used as a heuristic and is theoretically justified in this work. Experimental results compared with the state of art show that SU is good at preserving topology and performs best on datasets with local manifold structures.","PeriodicalId":262601,"journal":{"name":"2012 IEEE International Workshop on Machine Learning for Signal Processing","volume":"258 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Stochastic unfolding\",\"authors\":\"Ke Sun, E. Bruno, S. Marchand-Maillet\",\"doi\":\"10.1109/MLSP.2012.6349713\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper proposes a nonlinear dimensionality reduction technique called Stochastic Unfolding (SU). Similar to Stochastic Neighbour Embedding (SNE), N input signals are first encoded into a N × N matrix of probability distribution(s) for subsequent learning. Unlike SNE, these probabilities are not to be preserved in the embedding, but to be deformed in the way that the embedded signals have less curvature than the original signals. The cost function is based on another type of statistical estimation instead of the commonly-used maximum likelihood estimator. Its gradient presents a Mexican-hat shape with local attraction and remote repulsion, which was used as a heuristic and is theoretically justified in this work. 
Experimental results compared with the state of art show that SU is good at preserving topology and performs best on datasets with local manifold structures.\",\"PeriodicalId\":262601,\"journal\":{\"name\":\"2012 IEEE International Workshop on Machine Learning for Signal Processing\",\"volume\":\"258 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2012-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2012 IEEE International Workshop on Machine Learning for Signal Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/MLSP.2012.6349713\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 IEEE International Workshop on Machine Learning for Signal Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MLSP.2012.6349713","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper proposes a nonlinear dimensionality reduction technique called Stochastic Unfolding (SU). As in Stochastic Neighbour Embedding (SNE), the N input signals are first encoded into an N × N matrix of probability distributions for subsequent learning. Unlike SNE, these probabilities are not preserved in the embedding but are deformed so that the embedded signals have less curvature than the original signals. The cost function is based on a different type of statistical estimation rather than the commonly used maximum likelihood estimator. Its gradient has a Mexican-hat shape, with local attraction and remote repulsion; this shape has previously been used as a heuristic and is theoretically justified in this work. Experimental results compared with the state of the art show that SU preserves topology well and performs best on datasets with local manifold structure.
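The abstract only states that SU starts from an SNE-like encoding of the N inputs into an N × N probability matrix; the SU-specific cost function and its Mexican-hat gradient are not given here. The sketch below therefore shows only the standard SNE-style encoding step that the abstract refers to, assuming a Gaussian kernel with a fixed bandwidth `sigma` (in SNE this bandwidth is normally tuned per point via a perplexity target). It is an illustrative assumption, not the paper's implementation.

```python
import numpy as np


def sne_probabilities(X, sigma=1.0):
    """Encode N signals (rows of X) into an N x N row-stochastic matrix P,
    where P[i, j] is the probability that point i picks point j as a
    neighbour under a Gaussian kernel (the SNE-style encoding)."""
    # Squared Euclidean distances between all pairs of points.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Gaussian affinities; a point is never its own neighbour.
    affinities = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(affinities, 0.0)
    # Normalise each row to obtain conditional probabilities p(j | i).
    return affinities / affinities.sum(axis=1, keepdims=True)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 10))          # 100 toy signals in 10 dimensions
    P = sne_probabilities(X)
    print(P.shape, np.allclose(P.sum(axis=1), 1.0))  # (100, 100) True
```

In SU these probabilities are then deformed rather than preserved, driven by a gradient whose shape (local attraction, remote repulsion) the abstract describes as a Mexican hat; the exact form of that cost and gradient is given in the paper itself, not in the abstract, and is not reproduced here.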