D. Ionescu, V. Suse, C. Gadea, B. Solomon, B. Ionescu, S. Islam
{"title":"一种用于虚拟环境手势控制的红外深度相机","authors":"D. Ionescu, V. Suse, C. Gadea, B. Solomon, B. Ionescu, S. Islam","doi":"10.1109/CIVEMSA.2013.6617388","DOIUrl":null,"url":null,"abstract":"Gesture Control dominates presently the research on new human computer interfaces. The domain covers both the sensors to capture gestures and also the driver software which interprets the gesture mapping it onto a robust command. More recently, there is a trend to use depth-mapping camera as the 2D cameras fall short in assuring the conditions of real-time robustness of the whole system. As image processing is at the core of the detection, recognition, and tracking the gesture, depth mapping sensors have to provide a depth image insensitive to illumination conditions. Thus depth-mapping cameras work in a certain wavelength of the infrared (IR) spectrum. In this paper, a novel real-time depth-mapping principle for an IR camera is introduced. The new IR camera architecture comprises an illuminator module which is pulse-modulated via a monotonic function using a cycle driven feedback loop for the control of laser intensity, while the reflected infrared light is captured in “slices” of the space in which the object of interest is situated. A reconfigurable hardware architecture unit calculates the depth slices and combines them in a depth-map of the object to be further used in the detection, tracking, and recognition of the gesture made by the user. Images of real objects are reconstructed in 3D based on the data obtained by the space-slicing technique, and a corresponding image processing algorithm builds the 3D map of the object in real-time. As this paper will show through a series of experiments, the camera can be used in a variety of domains, including for gesture control of 3D objects in virtual environments.","PeriodicalId":159100,"journal":{"name":"2013 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2013-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"An infrared-based depth camera for gesture-based control of virtual environments\",\"authors\":\"D. Ionescu, V. Suse, C. Gadea, B. Solomon, B. Ionescu, S. Islam\",\"doi\":\"10.1109/CIVEMSA.2013.6617388\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Gesture Control dominates presently the research on new human computer interfaces. The domain covers both the sensors to capture gestures and also the driver software which interprets the gesture mapping it onto a robust command. More recently, there is a trend to use depth-mapping camera as the 2D cameras fall short in assuring the conditions of real-time robustness of the whole system. As image processing is at the core of the detection, recognition, and tracking the gesture, depth mapping sensors have to provide a depth image insensitive to illumination conditions. Thus depth-mapping cameras work in a certain wavelength of the infrared (IR) spectrum. In this paper, a novel real-time depth-mapping principle for an IR camera is introduced. The new IR camera architecture comprises an illuminator module which is pulse-modulated via a monotonic function using a cycle driven feedback loop for the control of laser intensity, while the reflected infrared light is captured in “slices” of the space in which the object of interest is situated. 
A reconfigurable hardware architecture unit calculates the depth slices and combines them in a depth-map of the object to be further used in the detection, tracking, and recognition of the gesture made by the user. Images of real objects are reconstructed in 3D based on the data obtained by the space-slicing technique, and a corresponding image processing algorithm builds the 3D map of the object in real-time. As this paper will show through a series of experiments, the camera can be used in a variety of domains, including for gesture control of 3D objects in virtual environments.\",\"PeriodicalId\":159100,\"journal\":{\"name\":\"2013 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)\",\"volume\":\"89 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-07-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2013 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/CIVEMSA.2013.6617388\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2013 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CIVEMSA.2013.6617388","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
An infrared-based depth camera for gesture-based control of virtual environments
Gesture control currently dominates research on new human-computer interfaces. The domain covers both the sensors that capture gestures and the driver software that interprets a gesture and maps it onto a robust command. More recently, the trend has been toward depth-mapping cameras, since 2D cameras fall short of ensuring real-time robustness for the whole system. Because image processing is at the core of detecting, recognizing, and tracking gestures, depth-mapping sensors must provide a depth image that is insensitive to illumination conditions; depth-mapping cameras therefore operate at specific wavelengths in the infrared (IR) spectrum. In this paper, a novel real-time depth-mapping principle for an IR camera is introduced. The new IR camera architecture comprises an illuminator module that is pulse-modulated via a monotonic function, using a cycle-driven feedback loop to control the laser intensity, while the reflected infrared light is captured in "slices" of the space in which the object of interest is situated. A reconfigurable hardware architecture unit calculates the depth slices and combines them into a depth map of the object, which is then used for the detection, tracking, and recognition of the gesture made by the user. Images of real objects are reconstructed in 3D from the data obtained by the space-slicing technique, and a corresponding image processing algorithm builds the 3D map of the object in real time. As this paper shows through a series of experiments, the camera can be used in a variety of domains, including gesture-based control of 3D objects in virtual environments.
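The space-slicing idea can be illustrated with a minimal sketch. This is not the authors' implementation (which runs on reconfigurable hardware with a pulse-modulated illuminator); it only assumes that each gated exposure yields an intensity image of the objects lying within its depth slice, and that a per-pixel depth map can then be formed by taking, for every pixel, the depth of the nearest slice in which a reflection appears. The function name, intensity threshold, and nearest-slice rule below are illustrative assumptions.

```python
import numpy as np

def depth_map_from_slices(slices, slice_depths, threshold=0.5):
    """Combine gated IR 'slice' images into a per-pixel depth map (illustrative sketch).

    slices:       (N, H, W) array; slices[k] is the reflected-IR image captured
                  while the camera is gated to depth slice k (near to far).
    slice_depths: (N,) array with the nominal depth of each slice, e.g. in metres.
    threshold:    minimum normalized intensity for a pixel to count as present
                  in a slice (assumed parameter, not from the paper).

    Returns an (H, W) depth map; pixels with no return in any slice are NaN.
    """
    slices = np.asarray(slices, dtype=float)
    present = slices >= threshold                   # (N, H, W) boolean occupancy per slice
    first_hit = np.argmax(present, axis=0)          # index of the nearest slice with a return
    seen = present.any(axis=0)                      # pixels that reflected in at least one slice
    return np.where(seen, np.asarray(slice_depths)[first_hit], np.nan)

# Usage example with synthetic data: four 2x2 slices spaced 0.5 m apart.
slices = np.random.rand(4, 2, 2)
depths = np.array([0.5, 1.0, 1.5, 2.0])
print(depth_map_from_slices(slices, depths))
```

The nearest-slice rule mirrors the intuition that, for an opaque object, the first slice in which a surface reflects determines its distance; finer depth resolution would come from narrower slices or from interpolating intensity across adjacent slices.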