Priyanka Choudhary, S. Tazi
2020 IEEE 15th International Conference on Industrial and Information Systems (ICIIS), 26 November 2020. DOI: 10.1109/ICIIS51140.2020.9342678
An Adaptive System of Yogic Gesture Recognition for Human Computer Interaction
The purpose of this research is to validate the potential of yogic hand gestures in a well-formed human-computer interface. A real-time image sequence captured on a video recording device is used to trace the potential subject region (PSR) — essentially the hand region — with the help of a skin detection algorithm, and to detect and recognise hand gestures for human-computer interaction. To detect skin, we use skin-colour detection and softening to remove extraneous background information from the image, and then apply background subtraction to locate the PSR. To suppress residual background information, we track the detected PSR with the kernelised correlation filter (KCF) algorithm. The PSR image is then resized to 50 × 50 pixels and fed into a deep convolutional neural network (CNN) — a modified VGGNet developed in this study — to identify eight yogic hand gestures. This tracking-and-recognition process is repeated with a ranking algorithm to give a real-time impression, and the system continues to run until the hand leaves the camera's range. While recognising a gesture, the system also adds the top-ranked image captures to the sample pool for future training. The training data set reaches a recognition rate of 99.00% and the test data set a recognition rate of 95.89%, which demonstrates the feasibility of practical application. The implemented proof of concept and the custom yogic gesture dataset, the YoGiR-1 dataset, are available on request.
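The skin-detection and background-subtraction stage of such a pipeline can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the plain-RGB skin heuristic and all thresholds are assumptions for demonstration (the abstract does not specify the detector's colour space or parameters).

```python
import numpy as np

def skin_mask_rgb(frame, r_min=95, g_min=40, b_min=20):
    """Very rough RGB skin heuristic (illustrative stand-in for the
    paper's skin-colour detector; thresholds are assumed)."""
    r = frame[..., 0].astype(int)
    g = frame[..., 1].astype(int)
    b = frame[..., 2].astype(int)
    return (r > r_min) & (g > g_min) & (b > b_min) & (r > g) & (r > b)

def foreground_mask(frame, background, thresh=30):
    """Background subtraction: keep pixels that differ from a static
    background frame by more than a threshold."""
    diff = np.abs(frame.astype(int) - background.astype(int)).sum(axis=-1)
    return diff > thresh

def extract_psr(frame, background):
    """Combine skin and foreground cues; return the PSR bounding box
    as (x_min, y_min, x_max, y_max), or None if no skin is found."""
    mask = skin_mask_rgb(frame) & foreground_mask(frame, background)
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())

def resize_nearest(patch, size=50):
    """Nearest-neighbour resize of the cropped PSR to size x size,
    matching the 50 x 50 CNN input described above."""
    h, w = patch.shape[:2]
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    return patch[rows][:, cols]
```

In a real system the softening step would be a blur over the mask and the background model would be updated over time; both are omitted here for brevity.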
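The KCF tracker works by ridge regression over all cyclic shifts of the target patch, which becomes a cheap per-frequency division in the Fourier domain. The sketch below is a simplified 1-D, linear-kernel variant (closer to MOSSE/DCF than the full Gaussian-kernel KCF) intended only to illustrate that idea; it is not the paper's tracker.

```python
import numpy as np

def train_filter(x, y, lam=1e-2):
    """Learn a correlation filter from template x and desired response y
    (ridge regression over all cyclic shifts of x, solved per frequency).
    lam is the regularisation weight."""
    F = np.fft.fft(x)
    G = np.fft.fft(y)
    # Closed-form solution: conj(F) * G / (|F|^2 + lam), elementwise.
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def respond(filt, z):
    """Correlate a new patch z with the learned filter; the peak of the
    real response indicates the target's cyclic translation."""
    return np.real(np.fft.ifft(np.fft.fft(z) * filt))
```

For example, training on a random signal with a desired response peaked at index 0, then responding to that signal cyclically shifted by 10 samples, moves the response peak to index 10 — which is how the tracker localises the hand from frame to frame.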
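The abstract does not specify the modified VGGNet layer by layer, so the pure-Python trace below only illustrates how a VGG-style stack of 3 × 3 convolutions and 2 × 2 max-pools shrinks the 50 × 50 input before an 8-way classification head. The block channel counts (32, 64, 128) are assumptions for illustration, not the paper's architecture.

```python
def conv_out(size, kernel=3, pad=1, stride=1):
    """Spatial size after a convolution (VGG uses 3x3, 'same' padding)."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, kernel=2, stride=2):
    """Spatial size after a 2x2 max-pool with stride 2."""
    return (size - kernel) // stride + 1

def vgg_style_trace(input_size=50, block_channels=(32, 64, 128)):
    """Trace (channels, spatial size) after each conv-conv-pool block
    of a reduced VGG-style network. Channel counts are illustrative."""
    size = input_size
    trace = []
    for channels in block_channels:
        size = conv_out(conv_out(size))  # two 3x3 same-padding convs
        size = pool_out(size)            # pooling roughly halves the map
        trace.append((channels, size))
    return trace

# The final feature map is flattened and passed through fully connected
# layers ending in 8 outputs, one per yogic hand gesture.
```

With these assumed sizes the trace is 50 → 25 → 12 → 6, so the flattened feature vector feeding the dense head has 128 · 6 · 6 = 4608 values.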