Sobia Fayyaz, Y. Ayaz
Proceedings of the 3rd International Conference on Machine Learning and Soft Computing, 2019-01-25
DOI: 10.1145/3310986.3311011
CNN and Traditional Classifiers Performance for Sign Language Recognition
Many people around the world live with speech and hearing disabilities and communicate with others through actions rather than speech. They use sign language (hand gestures) to convey their thoughts. Every language has a set of grammatical rules for expressing information meaningfully, but not everyone can understand what is conveyed through sign language. Automatic translation of sign language is therefore a basic need: it overcomes many communication barriers and provides convenience for impaired people in an era of developing technology. For many years, researchers have worked on better algorithms for sign language communication using machine learning and computer vision techniques, typically proceeding through stages such as pre-processing, segmentation, feature extraction, and classification. Efficient features can produce more effective and accurate results. This paper compares the performance of different classifiers against a deep convolutional neural network (CNN) on a sign language dataset, both with and without a local feature descriptor and a bag of visual words. This is a classification task for which the CNN, the Multi-Layer Perceptron (MLP), and the Support Vector Machine (SVM) are widely considered. A CNN can extract and represent high-level abstractions in the dataset, which yields good accuracy, but some traditional classifiers are equally capable when provided with good features. We evaluate the performance of the MLP and SVM, with and without Speeded-Up Robust Features (SURF), on the same data given to the CNN. The results discussed in this paper show that the MLP and SVM employing the descriptor achieve higher accuracy than the CNN.
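The descriptor-based pipeline the abstract refers to can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic 64-D descriptors stand in for real SURF keypoint descriptors (SURF produces 64- or 128-D vectors per keypoint, and in OpenCV requires the non-free contrib build), and scikit-learn's KMeans and SVC play the roles of the visual vocabulary and the SVM classifier.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for SURF: each "image" yields a set of 64-D local
# descriptors. Real code would run a keypoint detector/descriptor here.
def fake_descriptors(cls, n=30):
    return rng.normal(loc=cls, scale=1.0, size=(n, 64))

# Two hypothetical gesture classes, 10 training images each.
train_imgs = [(fake_descriptors(c), c) for c in (0, 3) for _ in range(10)]

# 1) Build the visual vocabulary: cluster all training descriptors into
#    k "visual words".
k = 8
vocab = KMeans(n_clusters=k, n_init=10, random_state=0).fit(
    np.vstack([d for d, _ in train_imgs]))

# 2) Bag-of-visual-words encoding: represent each image as a normalized
#    histogram of visual-word occurrences.
def encode(desc):
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()

X = np.array([encode(d) for d, _ in train_imgs])
y = np.array([c for _, c in train_imgs])

# 3) Train the traditional classifier (an SVM here; an MLP would slot in
#    the same way) on the fixed-length histogram features.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))
```

The key design point this illustrates is that the descriptor plus bag-of-visual-words step converts a variable number of local features per image into a fixed-length vector, which is what lets classifiers like SVM and MLP compete with a CNN that learns its feature representation end-to-end.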