Authors: Anish Monsley Kirupakaran, K. Yadav, R. Laskar
DOI: 10.1109/NCC55593.2022.9806782
Published in: 2022 National Conference on Communications (NCC), 2022-05-24
Resolving the ambiguity in recognizing case-sensitive characters gesticulated in mid-air through post-decision support modules
Unlike real-world objects, which remain the same regardless of changes in size at a fixed or varying scale, a few English letters become identical to one another because of case ambiguity. Recognizing letters becomes more complex when different characters are gesticulated with the same pattern or become similar due to gesticulation style. The generalization ability of deep convolutional neural networks (DCNNs) results in these characters being misclassified. To overcome this, we propose a two-stage recognition model comprising a DCNN and an advisor unit (AU), followed by a post-decision support module (P-DSM). The model differentiates these similar characters based on the actual gesticulated size, extracts features from 1D and 2D perspectives, and captures the demographics of the gesticulation. It discriminates these similar characters with an accuracy of ~92% on the NITS hand gesture database. Experiments on the popular handwritten EMNIST database suggest that its pre-processing steps cause characters to lose their size information.
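The core idea of the post-decision step can be illustrated with a minimal sketch: a primary classifier's label is re-examined only when it belongs to a case-ambiguous pair, and the actual gesticulated size decides the case. The letter set, the normalized-height feature, and the threshold below are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of size-based case disambiguation after a primary
# classifier. Names, the letter set, and the threshold are illustrative.

# Letters whose upper- and lower-case shapes are near-identical, so that
# only size can distinguish them (e.g. C/c, O/o, S/s).
CASE_AMBIGUOUS = {"c", "o", "s", "u", "v", "w", "x", "z", "p", "k"}

def post_decision_support(label: str, gesture_height: float,
                          size_threshold: float = 0.5) -> str:
    """Resolve case ambiguity using the actual gesticulated size.

    gesture_height is assumed to be normalized to [0, 1] relative to the
    writing area; larger gestures are treated as upper case.
    """
    if label.lower() not in CASE_AMBIGUOUS:
        # Shape alone already determines the case; keep the classifier's label.
        return label
    return label.upper() if gesture_height >= size_threshold else label.lower()

# Example: a 'c'-shaped gesture drawn large is read as upper case.
print(post_decision_support("c", 0.8))  # C
print(post_decision_support("c", 0.2))  # c
```

Note that this toy rule is exactly the information the abstract says EMNIST-style pre-processing destroys: once characters are rescaled to a fixed size, `gesture_height` no longer carries case information.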