CNN and Traditional Classifiers Performance for Sign Language Recognition

Sobia Fayyaz, Y. Ayaz
{"title":"CNN和传统分类器在手语识别中的表现","authors":"Sobia Fayyaz, Y. Ayaz","doi":"10.1145/3310986.3311011","DOIUrl":null,"url":null,"abstract":"Many people around the world are suffering from vocal and hearing disabilities and they communicate with others by actions rather than speech. They prefer sign language (hand gestures) to convey what revolves in their mind. Every language has some set of rules of grammar to express information in meaningful way but not everyone can recognize what is being conveyed through sign language. Thus the automatic translation of a sign language serves as basic need for overcoming many difficulties and providing convenience for impaired people in the developing era of technology. For many years, a lot of researchers have been working on developing the better algorithm for sign language communication using machine learning and computer vision techniques that passes through many stages such as pre-processing, segmentation, extraction of features and classification. But the efficient features can produce more effective and accurate results. This paper aims at comparing performance of different classifiers to deep convolutional neural network (CNN) on sign language dataset providing with and without local feature descriptor and bag of visual words. This is definitely a classification task for which CNN, Multi-Layer Perceptron (MLP) and Support Vector Machine (SVM) are being widely considered. CNN is capable of extracting and representing high-level abstractions in the dataset that results good accuracy but some traditional classifiers are also capable for that when providing with good features. We evaluate the performance of MLP and SVM without and with Speed up Robust Features (SURF) on the same data given to CNN. Results are also discussed in this paper that shows MLP and SVM employing descriptor gives high accuracy than CNN.","PeriodicalId":252781,"journal":{"name":"Proceedings of the 3rd International Conference on Machine Learning and Soft Computing","volume":"56 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"CNN and Traditional Classifiers Performance for Sign Language Recognition\",\"authors\":\"Sobia Fayyaz, Y. Ayaz\",\"doi\":\"10.1145/3310986.3311011\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Many people around the world are suffering from vocal and hearing disabilities and they communicate with others by actions rather than speech. They prefer sign language (hand gestures) to convey what revolves in their mind. Every language has some set of rules of grammar to express information in meaningful way but not everyone can recognize what is being conveyed through sign language. Thus the automatic translation of a sign language serves as basic need for overcoming many difficulties and providing convenience for impaired people in the developing era of technology. For many years, a lot of researchers have been working on developing the better algorithm for sign language communication using machine learning and computer vision techniques that passes through many stages such as pre-processing, segmentation, extraction of features and classification. But the efficient features can produce more effective and accurate results. 
This paper aims at comparing performance of different classifiers to deep convolutional neural network (CNN) on sign language dataset providing with and without local feature descriptor and bag of visual words. This is definitely a classification task for which CNN, Multi-Layer Perceptron (MLP) and Support Vector Machine (SVM) are being widely considered. CNN is capable of extracting and representing high-level abstractions in the dataset that results good accuracy but some traditional classifiers are also capable for that when providing with good features. We evaluate the performance of MLP and SVM without and with Speed up Robust Features (SURF) on the same data given to CNN. Results are also discussed in this paper that shows MLP and SVM employing descriptor gives high accuracy than CNN.\",\"PeriodicalId\":252781,\"journal\":{\"name\":\"Proceedings of the 3rd International Conference on Machine Learning and Soft Computing\",\"volume\":\"56 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-01-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 3rd International Conference on Machine Learning and Soft Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3310986.3311011\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd International Conference on Machine Learning and Soft Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3310986.3311011","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2

Abstract

Many people around the world live with speech and hearing disabilities and communicate through actions rather than speech, preferring sign language (hand gestures) to express what is on their minds. Every language has grammatical rules for conveying information in a meaningful way, but not everyone can understand what is being expressed through sign language. Automatic sign language translation is therefore a basic need for overcoming these difficulties and making life easier for impaired people as technology develops. For many years, researchers have worked on better algorithms for sign language communication using machine learning and computer vision techniques, typically organized as a pipeline of pre-processing, segmentation, feature extraction, and classification; efficient features in particular can produce more effective and accurate results. This paper compares the performance of different classifiers against a deep convolutional neural network (CNN) on a sign language dataset, with and without a local feature descriptor and a bag of visual words. This is a classification task for which CNN, Multi-Layer Perceptron (MLP), and Support Vector Machine (SVM) are widely considered. A CNN can extract and represent high-level abstractions in the dataset, which yields good accuracy, but some traditional classifiers can do the same when given good features. We evaluate the performance of MLP and SVM, with and without Speeded Up Robust Features (SURF), on the same data given to the CNN. The results discussed in this paper show that MLP and SVM using the descriptor achieve higher accuracy than the CNN.
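To make the comparison concrete, below is a minimal, hypothetical sketch of the handcrafted-feature pipeline the abstract describes: SURF descriptors encoded as a bag of visual words, then classified with SVM and MLP. The vocabulary size, classifier settings, and function names are illustrative assumptions, not values reported in the paper; SURF is patented and only available in opencv-contrib builds with the non-free modules enabled, so a free detector such as ORB could be substituted.

```python
# Hypothetical SURF + bag-of-visual-words + SVM/MLP pipeline (not the authors' code).
import numpy as np
import cv2
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

VOCAB_SIZE = 200  # assumed number of visual words

def extract_surf(gray_images):
    """Return one SURF descriptor array per grayscale gesture image."""
    surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)  # needs opencv-contrib non-free
    descriptors = []
    for img in gray_images:
        _, desc = surf.detectAndCompute(img, None)
        descriptors.append(desc if desc is not None else np.empty((0, 64)))
    return descriptors

def build_vocabulary(desc_list, k=VOCAB_SIZE):
    """Cluster all training descriptors into k visual words with k-means."""
    stacked = np.vstack([d for d in desc_list if len(d) > 0])
    return KMeans(n_clusters=k, random_state=0).fit(stacked)

def bovw_histograms(desc_list, kmeans):
    """Encode each image as a normalized histogram over the visual words."""
    hists = np.zeros((len(desc_list), kmeans.n_clusters))
    for i, desc in enumerate(desc_list):
        if len(desc) == 0:
            continue
        for word in kmeans.predict(desc):
            hists[i, word] += 1
        hists[i] /= hists[i].sum()
    return hists

def run_pipeline(gray_images, labels):
    """Train SVM and MLP on bag-of-visual-words features and print test accuracy."""
    desc_list = extract_surf(gray_images)
    train_d, test_d, y_train, y_test = train_test_split(
        desc_list, labels, test_size=0.2, random_state=0)
    kmeans = build_vocabulary(train_d)  # vocabulary built from training images only
    X_train = bovw_histograms(train_d, kmeans)
    X_test = bovw_histograms(test_d, kmeans)
    for name, clf in [("SVM", SVC(kernel="rbf", C=10.0)),
                      ("MLP", MLPClassifier(hidden_layer_sizes=(256,), max_iter=500))]:
        clf.fit(X_train, y_train)
        print(f"{name} accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")
```

For reference, the CNN baseline in the comparison trains directly on the raw gesture images without handcrafted features; the small Keras architecture below is likewise an assumed illustration, not the network reported in the paper.

```python
# Hypothetical CNN baseline on raw images (architecture assumed for illustration).
from tensorflow.keras import layers, models

def build_cnn(input_shape=(64, 64, 1), num_classes=26):  # e.g. a fingerspelling alphabet
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```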