Ig-Jae Kim, Shwan Lee, S. Ahn, Yong-Moo Kwon, Hyoung-Gon Kim
Title: 3D tracking of multi-objects using color and stereo for HCI
DOI: 10.1109/ICIP.2001.958105 (https://doi.org/10.1109/ICIP.2001.958105)
Published in: Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205)
Publication date: 2001-10-07
Citations: 5
Abstract
We present a method for 3D tracking of multiple objects using color and stereo, applied to navigation and manipulation in a virtual environment. We choose the human face and hands as tracking targets for HCI. To extract the face and hand regions against a complex background, we transform the input color image using a generalized skin color distribution (GSCD). From the transformed image, we detect the face and hand regions by WUPC. The candidate face and hand regions in the following frame are then predicted by a Kalman filter, and depth information is extracted by stereo matching only for the corresponding regions. Finally, the extracted 3D information of the face and hands enables natural navigation and manipulation of objects in the virtual environment without any additional device.
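The abstract's prediction step — estimating a candidate face/hand region in the following frame with a Kalman filter — can be illustrated with a minimal sketch. This is not the paper's implementation; the constant-velocity state model, the noise covariances `Q` and `R`, and the simulated detections are all assumptions for illustration. The filter tracks a region's centroid and predicts where to search in the next frame.

```python
import numpy as np

# Minimal sketch (assumed model, not the paper's code): a constant-velocity
# Kalman filter predicting a candidate region's centroid in the next frame.
# State x = [px, py, vx, vy]; measurement z = [px, py].

dt = 1.0  # one frame
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], dtype=float)   # constant-velocity transition
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # we observe position only
Q = np.eye(4) * 1e-2                        # process noise (assumed)
R = np.eye(2)                               # measurement noise (assumed)

def predict(x, P):
    """Propagate the state one frame ahead: the predicted search region."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update(x, P, z):
    """Correct the prediction with the detected centroid z."""
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Simulate a detected centroid moving (+2, +1) pixels per frame.
x = np.zeros(4)
P = np.eye(4)
for t in range(1, 10):
    x, P = predict(x, P)                 # where to search in frame t
    z = np.array([2.0 * t, 1.0 * t])     # detection in frame t (simulated)
    x, P = update(x, P, z)
```

In the paper's pipeline this prediction would restrict both the skin-color detection and the subsequent stereo matching to a small candidate window, rather than processing the whole image.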