{"title":"利用条件独立性进行聚类分析","authors":"T. Szántai, E. Kovács","doi":"10.1109/SACI.2013.6608986","DOIUrl":null,"url":null,"abstract":"In this paper we introduce an unsupervised learning algorithm for discovering some of the conditional independences between the attributes (features) which characterize the elements of a statistical population. Using this algorithm we obtain a graph structure which makes possible the clustering of data elements into classes in an efficient way. In the same time our algorithm gives a new method for reducing the dimension of the feature space. In this way also the visualization of the clusters becomes possible in lower dimensional cases. The results of this type of clustering can be used also for classification of new data elements. We show how the method works on real problems and compare our results to those of other algorithms which are applied to the same dataset.","PeriodicalId":304729,"journal":{"name":"2013 IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Cluster analysis by exploiting conditional independences\",\"authors\":\"T. Szántai, E. Kovács\",\"doi\":\"10.1109/SACI.2013.6608986\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this paper we introduce an unsupervised learning algorithm for discovering some of the conditional independences between the attributes (features) which characterize the elements of a statistical population. Using this algorithm we obtain a graph structure which makes possible the clustering of data elements into classes in an efficient way. In the same time our algorithm gives a new method for reducing the dimension of the feature space. In this way also the visualization of the clusters becomes possible in lower dimensional cases. The results of this type of clustering can be used also for classification of new data elements. We show how the method works on real problems and compare our results to those of other algorithms which are applied to the same dataset.\",\"PeriodicalId\":304729,\"journal\":{\"name\":\"2013 IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI)\",\"volume\":\"31 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-05-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SACI.2013.6608986\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE 8th International Symposium on Applied Computational Intelligence and Informatics (SACI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SACI.2013.6608986","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cluster analysis by exploiting conditional independences
In this paper we introduce an unsupervised learning algorithm for discovering some of the conditional independences between the attributes (features) that characterize the elements of a statistical population. Using this algorithm we obtain a graph structure that allows data elements to be clustered into classes efficiently. At the same time, the algorithm provides a new method for reducing the dimension of the feature space, which also makes visualization of the clusters possible in lower-dimensional cases. The results of this type of clustering can also be used to classify new data elements. We show how the method works on real problems and compare our results with those of other algorithms applied to the same dataset.
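To make the overall idea concrete, the following is a minimal, hypothetical sketch of the pipeline the abstract outlines: test conditional independences between pairs of features, build a dependency graph from the pairs that remain dependent, use that graph to reduce the feature space, and cluster in the reduced space. The independence test (partial correlation), the threshold, the component-based dimension reduction, and the final k-means step are all illustrative assumptions and are not taken from the paper itself.

```python
import numpy as np
from itertools import combinations
from sklearn.cluster import KMeans


def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the conditioning set z."""
    if z.shape[1] == 0:
        return np.corrcoef(x, y)[0, 1]
    beta_x = np.linalg.lstsq(z, x, rcond=None)[0]
    beta_y = np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(x - z @ beta_x, y - z @ beta_y)[0, 1]


def dependency_graph(X, threshold=0.1):
    """Keep edge (i, j) only if features i and j stay correlated given all
    remaining features, i.e. no conditional independence was detected."""
    n_features = X.shape[1]
    edges = set()
    for i, j in combinations(range(n_features), 2):
        others = [k for k in range(n_features) if k not in (i, j)]
        if abs(partial_corr(X[:, i], X[:, j], X[:, others])) > threshold:
            edges.add((i, j))
    return edges


def cluster_with_graph(X, n_clusters=3, threshold=0.1):
    """Cluster data after a crude graph-based dimension reduction:
    keep one representative feature per connected component of the graph."""
    edges = dependency_graph(X, threshold)
    parent = list(range(X.shape[1]))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    for i, j in edges:
        parent[find(i)] = find(j)
    representatives = sorted({find(i) for i in range(X.shape[1])})
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X[:, representatives])
    return labels, representatives
```

The representative features returned by `cluster_with_graph` can also be used to plot the clusters in two or three dimensions, mirroring the visualization use case mentioned in the abstract; the paper's actual graph construction and clustering steps may differ substantially from this sketch.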