A Comparative Study of Object Recognition Techniques: Softmax, Linear and Quadratic Discriminant Analysis Based on Convolutional Neural Network Feature Extraction
Napol Siripibal, S. Supratid, Chaitawatch Sudprasert
{"title":"A Comparative Study of Object Recognition Techniques: Softmax, Linear and Quadratic Discriminant Analysis Based on Convolutional Neural Network Feature Extraction","authors":"Napol Siripibal, S. Supratid, Chaitawatch Sudprasert","doi":"10.1145/3335550.3335584","DOIUrl":null,"url":null,"abstract":"This paper presents a comparison study on using softmax, linear discriminant analysis (LDA) and quadratic discriminant analysis (QDA) for object recognition. The least effort is needed for hyper-parameter tuning or selection for all such three classifiers. Convolutional neural network (CNN), using feed-forward-architecture deep learning neural network is employed here for efficient feature extraction and reduction. Then, the extracted, reduced features are fed into the classification comparison. The experiments rely on a small-image CIFAR-10 dataset such that a simple, four convolutional-layer CNN architecture can possibly handle effective feature extraction with hardly over-fitting. Recognition performance evaluations rely on averages of precision, recall, F1 scores and accuracy rates, based on 10-fold cross validation for bias reduction purpose. Such performance measures are implemented under balanced as well as unbalanced --class data, respectively referred to equal and uniform-random-sampling unequal --size class dataset. The results indicate a few bits of recognition performance differences regarding F1 scores as well as accuracy rates among the CNN-LDA, CNN-QDA and CNN-softmax, where the balanced-class and unbalanced-class are separately determined. However, the lowest and the highest of the largest wrong prediction cases are generated by CNN-QDA and CNN-softmax respectively for both balanced and unbalanced-class data.","PeriodicalId":312704,"journal":{"name":"Proceedings of the 2019 International Conference on Management Science and Industrial Engineering - MSIE 2019","volume":"57 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 International Conference on Management Science and Industrial Engineering - MSIE 2019","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3335550.3335584","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6
Abstract
This paper presents a comparative study of softmax, linear discriminant analysis (LDA), and quadratic discriminant analysis (QDA) for object recognition. All three classifiers require minimal hyper-parameter tuning or selection. A convolutional neural network (CNN), a feed-forward deep learning architecture, is employed for efficient feature extraction and dimensionality reduction; the extracted, reduced features are then passed to the three classifiers for comparison. The experiments use the small-image CIFAR-10 dataset, so that a simple four-convolutional-layer CNN can perform effective feature extraction with little over-fitting. Recognition performance is evaluated using average precision, recall, F1 score, and accuracy rate under 10-fold cross-validation to reduce bias. These measures are computed on both balanced-class and unbalanced-class data, i.e., on equal-size classes and on unequal-size classes obtained by uniform random sampling, respectively. The results show only small differences in F1 score and accuracy rate among CNN-LDA, CNN-QDA, and CNN-softmax, with the balanced-class and unbalanced-class settings assessed separately. However, for both balanced and unbalanced data, the largest wrong-prediction case is lowest for CNN-QDA and highest for CNN-softmax.
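The pipeline described above can be sketched in a few lines of Python. The following is a minimal illustration, not the authors' implementation: the layer widths, the 256-dimensional feature layer, the training epochs, and the use of Keras with scikit-learn's LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis, and LogisticRegression (as a softmax classifier on the extracted features) are all assumptions made for the example.

```python
# Minimal sketch (not the paper's exact code): extract features from CIFAR-10
# with a small four-convolutional-layer CNN, then compare softmax, LDA, and QDA
# classifiers on those features with 10-fold cross-validation.
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                           QuadraticDiscriminantAnalysis)
from sklearn.linear_model import LogisticRegression  # multinomial logistic = softmax
from sklearn.model_selection import cross_val_score

# Load and normalise CIFAR-10 (32x32 RGB images, 10 classes).
(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
x_tr, x_te = x_tr / 255.0, x_te / 255.0
y_tr, y_te = y_tr.ravel(), y_te.ravel()

# Simple four-convolutional-layer CNN with a softmax output head.
# Filter counts and the 256-unit feature layer are illustrative choices.
cnn = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu", name="features"),  # reduced feature vector
    layers.Dense(10, activation="softmax"),
])
cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.fit(x_tr, y_tr, epochs=10, batch_size=128, validation_split=0.1)

# Reuse the trained convolutional stack as a fixed feature extractor.
extractor = models.Model(cnn.input, cnn.get_layer("features").output)
feats = extractor.predict(x_tr, batch_size=256)

# Compare the three classifiers on the same features (accuracy, 10-fold CV).
for name, clf in [
    ("CNN-softmax", LogisticRegression(max_iter=1000)),
    ("CNN-LDA", LinearDiscriminantAnalysis()),
    ("CNN-QDA", QuadraticDiscriminantAnalysis()),
]:
    scores = cross_val_score(clf, feats, y_tr, cv=10, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```

In the same spirit, the unbalanced-class setting could be approximated by uniform random sampling of different amounts of training data per class before the cross-validation step, and precision, recall, and F1 can be obtained by swapping the `scoring` argument.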