{"title":"基于深度超向量的聚类表示","authors":"Amir Namavar Jahromi, S. Hashemi","doi":"10.1109/IKT.2017.8258629","DOIUrl":null,"url":null,"abstract":"Deep learning is a powerful method that has achieved huge success in numerous supervised applications. Nonetheless, the performance of deep learning in unsupervised learning has been far from being explored. Inspired by this observation, we proposed a deep autoencoder based representation learning for clustering. For this purpose, we devised a stacked autoencoder without the fine-tuning step and concatenated all layers' representations together to make a super vector that is a more powerful representation for clustering. We use k-means as our clustering algorithm and compare its results with the k-means clustering results on original representations. Our method was evaluated on six datasets and the results are promising.","PeriodicalId":338914,"journal":{"name":"2017 9th International Conference on Information and Knowledge Technology (IKT)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A deep super-vector based representation for clustering\",\"authors\":\"Amir Namavar Jahromi, S. Hashemi\",\"doi\":\"10.1109/IKT.2017.8258629\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Deep learning is a powerful method that has achieved huge success in numerous supervised applications. Nonetheless, the performance of deep learning in unsupervised learning has been far from being explored. Inspired by this observation, we proposed a deep autoencoder based representation learning for clustering. For this purpose, we devised a stacked autoencoder without the fine-tuning step and concatenated all layers' representations together to make a super vector that is a more powerful representation for clustering. We use k-means as our clustering algorithm and compare its results with the k-means clustering results on original representations. Our method was evaluated on six datasets and the results are promising.\",\"PeriodicalId\":338914,\"journal\":{\"name\":\"2017 9th International Conference on Information and Knowledge Technology (IKT)\",\"volume\":\"51 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2017 9th International Conference on Information and Knowledge Technology (IKT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IKT.2017.8258629\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2017 9th International Conference on Information and Knowledge Technology (IKT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IKT.2017.8258629","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A deep super-vector based representation for clustering
Deep learning is a powerful approach that has achieved great success in numerous supervised applications. Its potential in unsupervised learning, however, remains largely unexplored. Motivated by this observation, we propose a deep autoencoder-based representation learning method for clustering. Specifically, we train a stacked autoencoder without the fine-tuning step and concatenate the representations of all layers into a super vector, which provides a more powerful representation for clustering. We use k-means as the clustering algorithm and compare its results on the super vector with k-means clustering on the original representations. Our method was evaluated on six datasets and the results are promising.
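A minimal sketch of the super-vector idea described in the abstract follows; it is not the authors' original implementation. Each autoencoder layer is trained greedily (no end-to-end fine-tuning), the hidden activations of every layer are concatenated into one super vector, and k-means is run on that vector. The layer sizes, learning rate, and epoch counts are illustrative assumptions, not values from the paper.

```python
# Sketch: greedy stacked autoencoder -> concatenated "super vector" -> k-means.
# Hyperparameters (layer_dims, epochs, lr) are assumed for illustration only.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


def train_layer(x, hidden_dim, epochs=50, lr=1e-3):
    """Train a single autoencoder layer on x and return its encoder."""
    in_dim = x.shape[1]
    encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
    decoder = nn.Sequential(nn.Linear(hidden_dim, in_dim), nn.Sigmoid())
    opt = torch.optim.Adam(
        list(encoder.parameters()) + list(decoder.parameters()), lr=lr
    )
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(decoder(encoder(x)), x)  # reconstruction loss
        loss.backward()
        opt.step()
    return encoder


def deep_super_vector(data, layer_dims=(128, 64, 32)):
    """Greedy layer-wise training; concatenate every layer's representation."""
    x = torch.tensor(data, dtype=torch.float32)
    reps = []
    for dim in layer_dims:
        encoder = train_layer(x, dim)
        with torch.no_grad():
            x = encoder(x)  # this layer's output is the next layer's input
        reps.append(x.numpy())
    # No fine-tuning: all layer outputs are concatenated as-is.
    return np.concatenate(reps, axis=1)


if __name__ == "__main__":
    # Toy data: 200 samples, 256 features, clustered into 6 groups.
    rng = np.random.default_rng(0)
    data = rng.random((200, 256)).astype(np.float32)
    super_vec = deep_super_vector(data)
    labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(super_vec)
    print(labels[:20])
```

In this sketch the super vector has dimension 128 + 64 + 32 = 224; the paper's comparison baseline would correspond to running the same k-means step directly on the original feature matrix.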