{"title":"稀疏训练数据下基于特征的纹理分类器与卷积神经网络的性能比较","authors":"Ryan Dellana, K. Roy","doi":"10.1109/SECON.2017.7925325","DOIUrl":null,"url":null,"abstract":"In this work, we compare the performance of three local-feature-based texture classifiers and a Convolutional Neural Network (CNN) at face recognition with sparse training data. The texture-based classifiers use Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Scale Invariant Feature Transform (SIFT), respectively. The CNN uses six convolutional layers, two pooling layers, two fully connected layers, and outputs a softmax probability distribution over the classes. The dataset contains 100 classes with five samples each, and is partitioned so there is only one training sample per class. Under these conditions, we find that all three feature-based approaches significantly outperform the CNN, with the HOG-based approach showing especially strong performance.","PeriodicalId":368197,"journal":{"name":"SoutheastCon 2017","volume":"76 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2017-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Performance of feature-based texture classifiers versus Convolutional Neural Network under sparse training data\",\"authors\":\"Ryan Dellana, K. Roy\",\"doi\":\"10.1109/SECON.2017.7925325\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"In this work, we compare the performance of three local-feature-based texture classifiers and a Convolutional Neural Network (CNN) at face recognition with sparse training data. The texture-based classifiers use Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Scale Invariant Feature Transform (SIFT), respectively. The CNN uses six convolutional layers, two pooling layers, two fully connected layers, and outputs a softmax probability distribution over the classes. The dataset contains 100 classes with five samples each, and is partitioned so there is only one training sample per class. Under these conditions, we find that all three feature-based approaches significantly outperform the CNN, with the HOG-based approach showing especially strong performance.\",\"PeriodicalId\":368197,\"journal\":{\"name\":\"SoutheastCon 2017\",\"volume\":\"76 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"SoutheastCon 2017\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SECON.2017.7925325\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"SoutheastCon 2017","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SECON.2017.7925325","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Performance of feature-based texture classifiers versus Convolutional Neural Network under sparse training data
In this work, we compare the performance of three local-feature-based texture classifiers and a Convolutional Neural Network (CNN) on face recognition with sparse training data. The feature-based classifiers use Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Scale Invariant Feature Transform (SIFT), respectively. The CNN has six convolutional layers, two pooling layers, and two fully connected layers, and outputs a softmax probability distribution over the classes. The dataset contains 100 classes with five samples each and is partitioned so that there is only one training sample per class. Under these conditions, we find that all three feature-based approaches significantly outperform the CNN, with the HOG-based approach showing especially strong performance.
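As a rough illustration of the CNN described in the abstract (six convolutional layers, two pooling layers, two fully connected layers, and a softmax over the 100 classes), the following is a minimal Keras sketch. The input resolution, filter counts, kernel sizes, layer ordering, and optimizer are assumptions for illustration; the abstract does not specify them.

```python
# Hedged sketch of the abstract's CNN: 6 conv layers, 2 pooling layers,
# 2 fully connected layers, softmax over 100 classes.
# Input shape, filter counts, kernel sizes, and optimizer are assumed, not taken from the paper.
from tensorflow.keras import layers, models

NUM_CLASSES = 100  # 100 identities, one training sample each (per the abstract)

def build_cnn(input_shape=(64, 64, 1)):  # input resolution is an assumption
    model = models.Sequential([
        layers.Input(shape=input_shape),
        # First block: three convolutional layers, then the first pooling layer.
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.Conv2D(32, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(2),
        # Second block: three more convolutional layers, then the second pooling layer.
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.Conv2D(64, 3, activation='relu', padding='same'),
        layers.MaxPooling2D(2),
        # Two fully connected layers; the second produces the softmax distribution.
        layers.Flatten(),
        layers.Dense(256, activation='relu'),
        layers.Dense(NUM_CLASSES, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model

if __name__ == '__main__':
    build_cnn().summary()
```

With only one training image per class, a network of this size has far too few labeled examples to fit its parameters, which is consistent with the abstract's finding that the hand-crafted HOG, LBP, and SIFT descriptors outperform the CNN in this regime.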