Hand Gesture Recognition of Hand Shapes in Varied Orientations using Deep Learning
Kurt Jacobs, Mehrdad Ghasiazgar, I. Venter, Reg Dodds
Research Conference of the South African Institute of Computer Scientists and Information Technologists, 26 September 2016. DOI: 10.1145/2987491.2987524
A large number of Deaf people are unable to communicate by means of spoken language. A translation system that converts South African Sign Language to English and vice versa would therefore be invaluable to the Deaf community. In order to recognise sign language gestures, five fundamental gesture parameters, namely hand shape, hand orientation, hand motion, hand location and facial expression, need to be recognised separately. The research in this paper uses deep learning techniques, specifically convolutional neural networks, to recognise a set of hand shapes in various orientations within a live video stream captured on an iPhone mobile device. The research forms part of a larger project that aims to automatically translate South African Sign Language into English and vice versa. Two approaches to classifying gestures were proposed: a two-stage approach and a single-classifier approach. The former achieved an average accuracy of 70% and the latter an average accuracy of 67%.
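The abstract does not specify the network architecture, so the following is only a minimal illustrative sketch of the single-classifier idea it describes: a small convolutional neural network that maps a cropped hand image directly to one of several hand-shape classes. The input resolution, layer sizes, and class count below are assumptions for illustration, not the authors' configuration (written in PyTorch, which the paper does not necessarily use).

# Minimal sketch of a CNN hand-shape classifier (illustrative only).
# Input size, layer widths and number of classes are assumptions,
# not the configuration reported in the paper.
import torch
import torch.nn as nn

class HandShapeCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale hand crop in
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
            nn.ReLU(),
            nn.Linear(128, num_classes),                 # one logit per hand-shape class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: classify a batch of four 64x64 grayscale hand crops.
model = HandShapeCNN(num_classes=10)
logits = model(torch.randn(4, 1, 64, 64))
predicted = logits.argmax(dim=1)  # index of the most likely hand shape per image

A two-stage variant, as contrasted in the abstract, would split this into separate classifiers (for example, one stage to narrow down the hand orientation and a second to classify the hand shape); the exact staging used by the authors is not given in the abstract.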