{"title":"模拟元组类分配的子空间分类方法","authors":"Victor Robila, R. Haralick","doi":"10.1109/LISAT58403.2023.10179509","DOIUrl":null,"url":null,"abstract":"We developed a subspace classifier for measurement classification to provide an alternative to current deep learning approaches. Many modern neural networks cannot provide an understandable explanation for their classification. The subspace classifier provides a decomposition of the classification problem making it computationally simpler. We first use a Bayesian method in which all the class conditional probabilities for the entire measurement space can be stored in memory.Then we made experiments with simulated class conditional distributions and defined a subspace classifier that only stores the class conditional probabilities for the subspaces. This can use much larger distributions than the previous model as it uses much less memory so we expanded to cases where the measurement space is generated sequentially and everything does not have to be in the memory at the same time.For cases with distributions that fit in the memory we also compared a Bayesian approach with the subspace approach. The Bayesian subspace classifiers consistently outperformed the subspace classifiers without Bayes rule by a large margin. We also compare the subspace classifier with 3 Python Machine Learning Models, namely a Ridge Classifier, a Multi-Layer Perceptron (MLP) classifier (neural network), and a Support Vector Machine (SVM) on a set of tuples and class conditional probability distributions with 4 classes. The subspace classifier had an average probability of correct identification of 0.25172, the SVM model had an average accuracy of 0.20987, the neural network MLP classifier had an average accuracy of 0.2140 and the Ridge Classifier had an average accuracy of 0.2798 over 10,000 trials.","PeriodicalId":250536,"journal":{"name":"2023 IEEE Long Island Systems, Applications and Technology Conference (LISAT)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Subspace-Classification Approach for Simulated Tuple Class Assignment\",\"authors\":\"Victor Robila, R. Haralick\",\"doi\":\"10.1109/LISAT58403.2023.10179509\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We developed a subspace classifier for measurement classification to provide an alternative to current deep learning approaches. Many modern neural networks cannot provide an understandable explanation for their classification. The subspace classifier provides a decomposition of the classification problem making it computationally simpler. We first use a Bayesian method in which all the class conditional probabilities for the entire measurement space can be stored in memory.Then we made experiments with simulated class conditional distributions and defined a subspace classifier that only stores the class conditional probabilities for the subspaces. This can use much larger distributions than the previous model as it uses much less memory so we expanded to cases where the measurement space is generated sequentially and everything does not have to be in the memory at the same time.For cases with distributions that fit in the memory we also compared a Bayesian approach with the subspace approach. The Bayesian subspace classifiers consistently outperformed the subspace classifiers without Bayes rule by a large margin. 
We also compare the subspace classifier with 3 Python Machine Learning Models, namely a Ridge Classifier, a Multi-Layer Perceptron (MLP) classifier (neural network), and a Support Vector Machine (SVM) on a set of tuples and class conditional probability distributions with 4 classes. The subspace classifier had an average probability of correct identification of 0.25172, the SVM model had an average accuracy of 0.20987, the neural network MLP classifier had an average accuracy of 0.2140 and the Ridge Classifier had an average accuracy of 0.2798 over 10,000 trials.\",\"PeriodicalId\":250536,\"journal\":{\"name\":\"2023 IEEE Long Island Systems, Applications and Technology Conference (LISAT)\",\"volume\":\"6 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-05-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE Long Island Systems, Applications and Technology Conference (LISAT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/LISAT58403.2023.10179509\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE Long Island Systems, Applications and Technology Conference (LISAT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/LISAT58403.2023.10179509","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Subspace-Classification Approach for Simulated Tuple Class Assignment
We developed a subspace classifier for measurement classification as an alternative to current deep learning approaches. Many modern neural networks cannot provide an understandable explanation for their classifications; the subspace classifier, by contrast, decomposes the classification problem, making it computationally simpler. We first use a Bayesian method in which all the class-conditional probabilities for the entire measurement space are stored in memory. We then conducted experiments with simulated class-conditional distributions and defined a subspace classifier that stores class-conditional probabilities only for the subspaces. Because it uses far less memory, this classifier can handle much larger distributions than the full-table model, so we extended it to cases where the measurement space is generated sequentially and the entire distribution never has to reside in memory at once. For distributions that do fit in memory, we also compared a Bayesian approach with the subspace approach: the Bayesian subspace classifiers consistently outperformed the subspace classifiers without Bayes rule by a large margin.

We also compared the subspace classifier with three Python machine learning models, namely a Ridge classifier, a multi-layer perceptron (MLP) classifier (a neural network), and a support vector machine (SVM), on a set of tuples and class-conditional probability distributions with four classes. Over 10,000 trials, the subspace classifier had an average probability of correct identification of 0.25172, the SVM had an average accuracy of 0.20987, the MLP classifier had an average accuracy of 0.2140, and the Ridge classifier had an average accuracy of 0.2798.
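The two classifiers described above lend themselves to a compact illustration. The following is a minimal Python sketch, assuming a discrete measurement space of tuples and known class-conditional probability tables; the function names, the dictionary-based storage, and the product rule for combining subspace scores are illustrative assumptions, not the paper's actual implementation.

```python
def bayes_classify(x, cond_prob, priors):
    """Full-table Bayes rule: argmax_c P(x | c) * P(c).

    cond_prob: dict mapping class -> {tuple: probability} over the
    entire measurement space (must fit in memory).
    priors: dict mapping class -> prior probability P(c).
    """
    scores = {c: cond_prob[c].get(x, 0.0) * priors[c] for c in priors}
    return max(scores, key=scores.get)


def subspace_classify(x, subspaces, sub_prob, priors):
    """Subspace variant: only marginal tables over subspaces are stored.

    subspaces: list of component-index tuples, e.g. [(0, 1), (2, 3)]
    sub_prob: sub_prob[s][c] maps a projected sub-tuple to its
    class-conditional probability for class c.
    Scores are combined by product (an independence-style assumption;
    one possible combining rule, chosen here for illustration).
    """
    scores = {}
    for c in priors:
        p = priors[c]
        for s in subspaces:
            proj = tuple(x[i] for i in s)  # project tuple onto subspace
            p *= sub_prob[s][c].get(proj, 0.0)
        scores[c] = p
    return max(scores, key=scores.get)
```

The point of the decomposition is visible in the storage cost: the full table grows with the size of the whole measurement space, while the subspace tables grow only with the sizes of the projected subspaces, which is what allows much larger distributions to be handled.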
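For the baseline comparison, a minimal sketch using scikit-learn is shown below, assuming that is the library behind the Ridge, MLP, and SVM models named in the abstract; the simulated data, split, and model parameters here are placeholders, not the paper's actual experimental setup.

```python
import numpy as np
from sklearn.linear_model import RidgeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Illustrative stand-in data: 4-component integer tuples, 4 classes.
rng = np.random.default_rng(0)
X = rng.integers(0, 10, size=(2000, 4)).astype(float)
y = rng.integers(0, 4, size=2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit each baseline and report held-out accuracy.
for model in (RidgeClassifier(), MLPClassifier(max_iter=500), SVC()):
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, round(acc, 4))
```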