{"title":"利用图像处理技术从指尖位置生成三维手部模型","authors":"Natthapach Anuwattananon, S. Ruengittinun","doi":"10.1109/Ubi-Media.2019.00020","DOIUrl":null,"url":null,"abstract":"A gesture from hands and fingers have rich meanings in communication even without a word of sound. It would be very useful if a computer can understand a hand gesture. Hence, we can use a hand gesture to communicate with a robot and perform certain activities. This study focuses on tracking the position of each fingertip and palm to make a computer knows the gesture of a hand. The proposed solution was initially implemented using a MS Kinect camera while capturing a depth image of a human hand. Then, we applied some image processing algorithms to track the positions of fingertips. Finally, the result was visualized in a real-time 3D hand model based on the movements/signs given by a human hand. The experiment results indicate that the proposed approach can literally track the positions of a fingertip.","PeriodicalId":259542,"journal":{"name":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generating a 3D Hand Model from Position of Fingertip Using Image Processing Technique\",\"authors\":\"Natthapach Anuwattananon, S. Ruengittinun\",\"doi\":\"10.1109/Ubi-Media.2019.00020\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A gesture from hands and fingers have rich meanings in communication even without a word of sound. It would be very useful if a computer can understand a hand gesture. Hence, we can use a hand gesture to communicate with a robot and perform certain activities. This study focuses on tracking the position of each fingertip and palm to make a computer knows the gesture of a hand. The proposed solution was initially implemented using a MS Kinect camera while capturing a depth image of a human hand. Then, we applied some image processing algorithms to track the positions of fingertips. Finally, the result was visualized in a real-time 3D hand model based on the movements/signs given by a human hand. 
The experiment results indicate that the proposed approach can literally track the positions of a fingertip.\",\"PeriodicalId\":259542,\"journal\":{\"name\":\"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)\",\"volume\":\"27 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-08-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/Ubi-Media.2019.00020\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 Twelfth International Conference on Ubi-Media Computing (Ubi-Media)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/Ubi-Media.2019.00020","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Gestures made with the hands and fingers carry rich meaning in communication, even without a spoken word. It would be very useful if a computer could understand hand gestures, since a gesture could then be used to communicate with a robot and direct it to perform certain activities. This study focuses on tracking the position of each fingertip and the palm so that a computer can recognize the gesture of a hand. The proposed solution was initially implemented using an MS Kinect camera to capture a depth image of a human hand. Image-processing algorithms were then applied to track the positions of the fingertips. Finally, the result was visualized as a real-time 3D hand model driven by the movements and signs of the human hand. The experimental results indicate that the proposed approach can effectively track the positions of fingertips.
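The abstract does not detail the specific image-processing steps, so the sketch below shows one common way such a fingertip tracker can be built from a depth frame: threshold the depth image to isolate the hand, take the largest contour, estimate the palm center from image moments, and treat convex-hull points that protrude far from the palm as fingertip candidates. The function name `find_fingertips`, the OpenCV-based pipeline, and the depth-band and distance thresholds are illustrative assumptions, not the authors' published method.

```python
# Illustrative sketch only (not the authors' published pipeline): fingertip
# detection on a single depth frame via thresholding, contour extraction, and
# convex-hull analysis with OpenCV.

import cv2
import numpy as np

def find_fingertips(depth_8u, near=30, far=120):
    """Return (palm_center, fingertip_points) for the largest hand-like blob.

    depth_8u  : single-channel 8-bit depth image (assumes the Kinect's 16-bit
                depth map has already been scaled into the 0-255 range).
    near, far : assumed depth band, on that 8-bit scale, containing the hand.
    """
    # Keep only pixels whose depth falls inside the assumed hand range,
    # then remove small speckles with a morphological opening.
    mask = cv2.inRange(depth_8u, near, far)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    # The largest external contour is taken to be the hand silhouette.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, []
    hand = max(contours, key=cv2.contourArea)

    # Palm center from the contour's image moments.
    m = cv2.moments(hand)
    if m["m00"] == 0:
        return None, []
    palm = np.array([m["m10"] / m["m00"], m["m01"] / m["m00"]])

    # Fingertip candidates: convex-hull points that stick out well beyond the
    # palm, i.e. farther than 1.5x the radius of a circle with the hand's area.
    palm_radius = np.sqrt(cv2.contourArea(hand) / np.pi)
    hull = cv2.convexHull(hand)  # shape (N, 1, 2)
    tips = [tuple(int(v) for v in p)
            for p in hull[:, 0, :]
            if np.linalg.norm(p - palm) > 1.5 * palm_radius]

    return (int(palm[0]), int(palm[1])), tips
```

Reading the depth value at each detected 2D tip would give the third coordinate, which is the kind of per-fingertip 3D position needed to drive a real-time hand model as described in the abstract.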