Real-time control of 3D virtual human motion using a depth-sensing camera for agricultural machinery training
Chengfeng Wang, Qin Ma, Dehai Zhu, Hong Chen, Zhoutuo Yang
Mathematical and Computer Modelling, Vol. 58, No. 3, pp. 782–789 (August 2013). DOI: 10.1016/j.mcm.2012.12.026
Citations: 12
Abstract
To recreate human movements in a virtual environment in real time, we propose a new method for real-time tracking of 3D virtual full-body motion using a depth-sensing camera. The method relies on natural, non-contact interaction. The 3D virtual environment was constructed with a 3D graphics engine, and human joint data were computed from images acquired by a PrimeSense depth-sensing camera. The skeletal data for the human model were then extracted from the skinned-mesh animation by extending the engine's mesh modules. Finally, motion data from the depth sensor were combined with the joint data of the human model to achieve full-body control of a virtual human (VH). Experimental results show that the proposed method can drive VH full-body movements in real time from motion-sensing data. The method was applied to virtual driving training for agricultural machinery: trainees can become familiar with the basic operations of driving agricultural machinery using full-body motion instead of a mouse and keyboard. The training system is inexpensive, safe, and provides a strong sense of immersion.
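The pipeline the abstract describes (per-frame joint tracking from the depth sensor, mapped onto the skeleton that deforms the skinned-mesh virtual human) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the DepthSensor and Avatar classes, the joint names, the coordinate conversion, and the update loop are hypothetical stand-ins for the PrimeSense skeleton-tracking API and the 3D graphics engine's skinned-mesh skeleton.

```python
import time
import numpy as np

# Hypothetical joint set shared by the depth sensor's skeleton tracker
# and the virtual human's skinned-mesh skeleton.
JOINTS = ["torso", "head", "l_shoulder", "l_elbow", "l_hand",
          "r_shoulder", "r_elbow", "r_hand",
          "l_hip", "l_knee", "l_foot", "r_hip", "r_knee", "r_foot"]


class DepthSensor:
    """Stand-in for a PrimeSense-style user tracker (hypothetical API)."""

    def read_joints(self):
        # A real tracker would return 3D joint positions (in mm) plus a
        # confidence value per joint; here we fabricate placeholder data.
        return {name: (np.random.randn(3) * 100.0, 1.0) for name in JOINTS}


class Avatar:
    """Stand-in for the skinned-mesh virtual human in the graphics engine."""

    def __init__(self):
        self.bone_targets = {name: np.zeros(3) for name in JOINTS}

    def set_bone_target(self, name, position):
        # A real engine would turn this into bone transforms that deform
        # the skinned mesh; here we simply record the target position.
        self.bone_targets[name] = position


def sensor_to_avatar(p_mm, scale=0.001):
    """Map a sensor-space joint position (millimetres) into avatar space (metres)."""
    x, y, z = p_mm
    return np.array([x, y, -z]) * scale  # flip z: the camera looks along +z


def run(num_frames=300, fps=30, confidence_threshold=0.5):
    sensor, avatar = DepthSensor(), Avatar()
    frame_time = 1.0 / fps
    for _ in range(num_frames):
        joints = sensor.read_joints()
        for name, (pos_mm, conf) in joints.items():
            if conf >= confidence_threshold:      # ignore unreliable joints
                avatar.set_bone_target(name, sensor_to_avatar(pos_mm))
        # The graphics engine would render the updated skinned mesh here.
        time.sleep(frame_time)


if __name__ == "__main__":
    run()
```

In the training application described in the paper, a loop of this kind replaces mouse and keyboard input, so the trainee's full-body posture directly drives the virtual operator inside the agricultural-machinery driving scene.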