S. Manitsaris, A. Tsagaris, A. Glushkova, F. Moutarde, Frédéric Bevilacqua
{"title":"手指手势早期识别与RGB或深度相机统一框架","authors":"S. Manitsaris, A. Tsagaris, A. Glushkova, F. Moutarde, Frédéric Bevilacqua","doi":"10.1145/2948910.2948947","DOIUrl":null,"url":null,"abstract":"This paper presents a unified framework computer vision approach for finger gesture early recognition and interaction that can be applied on sequences of either RGB or depth images without any supervised skeleton extraction. Either RGB or time-of-flight cameras can be used to capture finger motions. The hand detection is based on a skin color model for color images or distance slicing for depth images. A unique hand model is used for the finger detection and identification. Static (fingerings) and dynamic (sequence and/or combination of fingerings) patterns can be early-recognized based on one-shot learning approach using a modified Hidden Markov Models approach. The recognition accuracy is evaluated in two different applications: musical and robotic interaction. In the first case standardized basic piano-like finger gestures (ascending/descending scales, ascending/descending arpeggio) are used to evaluate the performance of the system. In the second case, both standardized and user-defined gestures (driving, waypoints etc.) are recognized and used to interactively control an automated guided vehicle.","PeriodicalId":381334,"journal":{"name":"Proceedings of the 3rd International Symposium on Movement and Computing","volume":"108 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-07-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Fingers gestures early-recognition with a unified framework for RGB or depth camera\",\"authors\":\"S. Manitsaris, A. Tsagaris, A. Glushkova, F. Moutarde, Frédéric Bevilacqua\",\"doi\":\"10.1145/2948910.2948947\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper presents a unified framework computer vision approach for finger gesture early recognition and interaction that can be applied on sequences of either RGB or depth images without any supervised skeleton extraction. Either RGB or time-of-flight cameras can be used to capture finger motions. The hand detection is based on a skin color model for color images or distance slicing for depth images. A unique hand model is used for the finger detection and identification. Static (fingerings) and dynamic (sequence and/or combination of fingerings) patterns can be early-recognized based on one-shot learning approach using a modified Hidden Markov Models approach. The recognition accuracy is evaluated in two different applications: musical and robotic interaction. In the first case standardized basic piano-like finger gestures (ascending/descending scales, ascending/descending arpeggio) are used to evaluate the performance of the system. In the second case, both standardized and user-defined gestures (driving, waypoints etc.) 
are recognized and used to interactively control an automated guided vehicle.\",\"PeriodicalId\":381334,\"journal\":{\"name\":\"Proceedings of the 3rd International Symposium on Movement and Computing\",\"volume\":\"108 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2016-07-05\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 3rd International Symposium on Movement and Computing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/2948910.2948947\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 3rd International Symposium on Movement and Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2948910.2948947","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Fingers gestures early-recognition with a unified framework for RGB or depth camera
This paper presents a unified computer vision framework for early recognition of finger gestures and for interaction, which can be applied to sequences of either RGB or depth images without any supervised skeleton extraction. Either RGB or time-of-flight cameras can be used to capture finger motions. Hand detection is based on a skin color model for color images or on distance slicing for depth images. A single hand model is used for finger detection and identification. Static patterns (fingerings) and dynamic patterns (sequences and/or combinations of fingerings) can be recognized early, using a one-shot learning approach based on modified Hidden Markov Models. Recognition accuracy is evaluated in two different applications: musical interaction and robotic interaction. In the first case, standardized basic piano-like finger gestures (ascending/descending scales, ascending/descending arpeggios) are used to evaluate the performance of the system. In the second case, both standardized and user-defined gestures (driving, waypoints, etc.) are recognized and used to interactively control an automated guided vehicle.
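The abstract describes hand detection via a skin color model on RGB frames or distance slicing on depth frames. The sketch below illustrates these two detection paths in a generic way; it is not the authors' implementation. The OpenCV 4 calls, the HSV skin bounds, and the near/far depth band are all assumptions chosen for illustration.

```python
import numpy as np
import cv2  # OpenCV 4 assumed; the paper does not name a specific library


def detect_hand_rgb(bgr_image):
    """Hand candidate from a color frame via a simple HSV skin-color model.

    The HSV bounds are illustrative placeholders, not the paper's calibrated
    skin model.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([0, 40, 60], dtype=np.uint8)     # assumed skin bounds
    upper = np.array([25, 180, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Remove speckle and keep the largest blob as the hand candidate.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None


def detect_hand_depth(depth_mm, near=400, far=700):
    """Hand candidate from a time-of-flight frame via distance slicing.

    `near`/`far` (millimetres) define the slice in which the hand is expected;
    both values are assumptions for illustration.
    """
    mask = ((depth_mm > near) & (depth_mm < far)).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None
```

In both paths the returned contour would then be passed to the hand model for finger detection and identification.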
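The abstract also mentions early recognition of fingering patterns with modified Hidden Markov Models learned from a single example. The following minimal sketch shows the generic idea of early recognition with standard discrete HMMs: score a growing prefix of observations against every gesture model with the forward algorithm and commit as soon as one model clearly dominates. The margin-based decision rule and all parameter names are assumptions, not the paper's modified HMM formulation.

```python
import numpy as np


def forward_log_likelihood(obs, start_prob, trans_prob, emit_prob):
    """Log-likelihood of a (possibly partial) discrete observation sequence
    under one HMM, computed with the standard forward algorithm.

    obs        : sequence of observation symbol indices (e.g. fingering codes)
    start_prob : (N,)   initial state distribution
    trans_prob : (N, N) state transition matrix
    emit_prob  : (N, M) emission probabilities over M observation symbols
    """
    alpha = start_prob * emit_prob[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ trans_prob) * emit_prob[:, o]
    return np.log(alpha.sum() + 1e-300)


def early_recognize(partial_obs, models, margin=2.0):
    """Return a gesture label as soon as its prefix likelihood beats all
    competitors by `margin` in log space; otherwise return None and wait.

    `models` maps label -> (start_prob, trans_prob, emit_prob). The margin
    criterion is an illustrative decision rule, not the authors' method.
    """
    scores = {label: forward_log_likelihood(partial_obs, *params)
              for label, params in models.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]
    return None
```

With models trained from a single demonstration per gesture, such a prefix-scoring loop is what allows a decision (e.g. a vehicle command or a musical trigger) to be issued before the gesture has finished.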