{"title":"美国手语的卷积神经网络手势识别","authors":"Shruti Chavan, Xinrui Yu, J. Saniie","doi":"10.1109/EIT51626.2021.9491897","DOIUrl":null,"url":null,"abstract":"With the advancements in the computer vision technology, learning and using sign languages to communicate with deaf and mute people has become easier. Exciting research is ongoing for providing a global platform for communication in different sign languages. In this paper, we present a Deep Learning based approach to recognize a sign performed in American Sign Language by capturing an image as input. The system can predict the signs of 0 to 9 digits performed by the user. By utilizing image processing to convert RGB data to grayscale images, efficient reduction is achieved in the storage requirements and training time of the Convolutional Neural Network. The objective of the experiment is to find a mix of Image Processing and Deep Learning Architecture with lesser complexity to deploy the system in mobile applications or embedded single board computers. The database is trained from scratch using smaller networks as LeNet-5 and AlexNet as well as deeper network such as Vgg16 and MobileNet v2. The comparison of the recognition accuracies is discussed in the paper. The final selected architecture has only 10 layers including a dropout layer which boosted the training accuracy to 91.37% and testing accuracy to 87.5%.","PeriodicalId":162816,"journal":{"name":"2021 IEEE International Conference on Electro Information Technology (EIT)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"10","resultStr":"{\"title\":\"Convolutional Neural Network Hand Gesture Recognition for American Sign Language\",\"authors\":\"Shruti Chavan, Xinrui Yu, J. 
Saniie\",\"doi\":\"10.1109/EIT51626.2021.9491897\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"With the advancements in the computer vision technology, learning and using sign languages to communicate with deaf and mute people has become easier. Exciting research is ongoing for providing a global platform for communication in different sign languages. In this paper, we present a Deep Learning based approach to recognize a sign performed in American Sign Language by capturing an image as input. The system can predict the signs of 0 to 9 digits performed by the user. By utilizing image processing to convert RGB data to grayscale images, efficient reduction is achieved in the storage requirements and training time of the Convolutional Neural Network. The objective of the experiment is to find a mix of Image Processing and Deep Learning Architecture with lesser complexity to deploy the system in mobile applications or embedded single board computers. The database is trained from scratch using smaller networks as LeNet-5 and AlexNet as well as deeper network such as Vgg16 and MobileNet v2. The comparison of the recognition accuracies is discussed in the paper. 
The final selected architecture has only 10 layers including a dropout layer which boosted the training accuracy to 91.37% and testing accuracy to 87.5%.\",\"PeriodicalId\":162816,\"journal\":{\"name\":\"2021 IEEE International Conference on Electro Information Technology (EIT)\",\"volume\":\"8 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-05-14\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"10\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE International Conference on Electro Information Technology (EIT)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/EIT51626.2021.9491897\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE International Conference on Electro Information Technology (EIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/EIT51626.2021.9491897","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
With advancements in computer vision technology, learning and using sign languages to communicate with deaf and mute people has become easier. Exciting research is ongoing to provide a global platform for communication in different sign languages. In this paper, we present a deep-learning-based approach that recognizes a sign performed in American Sign Language from a captured input image. The system can predict the digits 0 to 9 signed by the user. By using image processing to convert RGB data to grayscale images, the storage requirements and training time of the convolutional neural network are efficiently reduced. The objective of the experiment is to find a combination of image processing and deep learning architecture with low enough complexity to deploy the system in mobile applications or on embedded single-board computers. The database is trained from scratch using smaller networks such as LeNet-5 and AlexNet, as well as deeper networks such as VGG16 and MobileNet v2. A comparison of the recognition accuracies is discussed in the paper. The final selected architecture has only 10 layers, including a dropout layer, which boosted the training accuracy to 91.37% and the testing accuracy to 87.5%.
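The abstract's preprocessing step converts RGB input to grayscale to shrink storage and training cost. The paper does not state which conversion it uses, so the sketch below assumes the standard ITU-R BT.601 luma weights; the 3x reduction in channel count (and hence in first-layer input size) is the point being illustrated.

```python
import numpy as np

def rgb_to_grayscale(rgb):
    """Convert an HxWx3 RGB image to a single-channel HxW grayscale
    image using the common ITU-R BT.601 luma weights (an assumption;
    the paper does not specify its exact conversion)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights  # weighted sum over the channel axis

# A grayscale image stores one value per pixel instead of three,
# cutting storage (and a CNN's input-layer width) by a factor of 3.
img = np.random.randint(0, 256, size=(64, 64, 3)).astype(np.float32)
gray = rgb_to_grayscale(img)
print(img.size, gray.size)  # 12288 4096
```

Because the weights sum to 1.0, a uniformly colored image keeps its brightness after conversion, which keeps pixel statistics comparable between the RGB and grayscale training pipelines.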
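The abstract credits a dropout layer for part of the final network's accuracy. As a reminder of the mechanism only (the paper's exact layer configuration is not given here), a minimal inverted-dropout sketch:

```python
import numpy as np

def dropout(x, rate, training, rng):
    """Inverted dropout: during training, zero each activation with
    probability `rate` and rescale the survivors by 1/(1-rate) so the
    expected activation is unchanged; at inference, pass through."""
    if not training:
        return x
    keep = rng.random(x.shape) >= rate  # Boolean mask of kept units
    return x * keep / (1.0 - rate)

rng = np.random.default_rng(0)
acts = np.ones(1000)
out = dropout(acts, rate=0.5, training=True, rng=rng)
# Roughly half the units are zeroed each pass; the mean activation
# stays near 1.0 in expectation, so no rescaling is needed at test time.
```

By forcing the network not to rely on any single activation, dropout acts as a regularizer, which is consistent with the abstract's observation that adding it improved both training and testing accuracy.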