Detection and analysis model for grammatical facial expressions in sign language
M. S. Bhuvan, D. Rao, Siddhartha Jain, T. Ashwin, Ram Mohana Reddy Guddetti, S. Kulgod
2016 IEEE Region 10 Symposium (TENSYMP), May 9, 2016. DOI: 10.1109/TENCONSPRING.2016.7519396
The proposed research explores a relatively new area: detecting expressions from facial points in sign language, with the aim of enhancing computer interaction with deaf and hard-of-hearing users. In contrast to the many gesture-based studies of sign language, this work focuses on facial points captured with a Kinect sensor as the basis for expression detection. This makes deployment on smartphones practical, since facial points are easier to capture than hand gestures. Exhaustive experiments are carried out with ten machine learning algorithms to detect nine types of grammatical expression, each modeled as a separate binary classification problem, in both user-dependent and user-independent scenarios. The best classifier for each expression outperforms current state-of-the-art techniques, with an ROC area above 0.95 in every case. The user-independent model performs comparably to the user-dependent model and is therefore recommended, since it is easier and more efficient to deploy in practical applications. Finally, the importance of each facial point for detecting each type of expression is mined, which can inform future research and applications that use facial points as a basis for decision making.
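To make the setup concrete, below is a minimal sketch of the evaluation scheme the abstract describes: one binary classifier per expression trained on facial-point features, scored by ROC area under a user-independent (leave-one-signer-out) split, with per-feature importances as a proxy for facial-point relevance. The synthetic data, the choice of a random forest, and all names here are illustrative assumptions, not the authors' exact pipeline or dataset.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneGroupOut

def user_independent_auc(X, y, users, n_trees=200):
    """Mean ROC AUC over leave-one-user-out folds for one expression,
    plus fold-averaged feature importances (facial-point relevance)."""
    aucs, importances = [], []
    for tr, te in LeaveOneGroupOut().split(X, y, groups=users):
        if len(np.unique(y[te])) < 2:  # AUC is undefined on one-class folds
            continue
        clf = RandomForestClassifier(n_estimators=n_trees, random_state=0)
        clf.fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))
        importances.append(clf.feature_importances_)
    return float(np.mean(aucs)), np.mean(importances, axis=0)

# Toy demo: 3 signers, 100 frames each, 100 (x, y, z) facial points
# flattened into 300 features; labels mark whether one grammatical
# expression is present in a frame.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 300))
y = rng.integers(0, 2, size=300)
users = np.repeat([0, 1, 2], 100)  # signer id per frame
auc, imp = user_independent_auc(X, y, users)
print(f"mean AUC: {auc:.2f}; most informative feature index: {imp.argmax()}")

Repeating this loop once per expression type yields the nine per-expression detectors and importance profiles the paper reports; on real facial-point data the AUC, unlike on this random toy data, would be meaningful.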