{"title":"一维深度扫描作为传感器网络中人体姿态检测替代特征的研究","authors":"Maryam S. Rasoulidanesh, S. Payandeh","doi":"10.1155/2022/2267107","DOIUrl":null,"url":null,"abstract":"Inspired by the notion of swarm robotics, sensing, and minimalism, in this paper, we study and analyze how a collection of only 1D depth scans can be used as a part of the minimum feature for human body detection and its segmentation in a point cloud. In relation to the traditional approaches which require a complete point cloud model representation for skeleton model reconstruction, our proposed approach offers a lower computation and power consumption, especially in sensor and robotic networks. Our main objective is to investigate if the reduced number of training data through a collection of 1D scans of a subject is related to the rate of recognition and if it can be used to accurately detect the human body and its posture. The method takes advantage of the frequency components of the depth images (here, we refer to it as a 1D scan). To coordinate a collection of these 1D scans obtained through a sensor network, we also proposed a sensor scheduling framework. The framework is evaluated using two stationary depth sensors and a mobile depth sensor. The performance of our method was analyzed through movements and posture details of a subject having two relative orientations with respect to the sensors with two classes of postures, namely, walking and standing. The novelty of the paper can be summarized in 3 main points. Firstly, unlike deep learning methods, our approach would require a smaller dataset for training. Secondly, our case studies show that the method uses very limited training dataset and still can detect the unseen situation and reasonably estimate the orientation and detail of the posture. Finally, we propose an online scheduler to improve the energy efficiency of the network sensor and minimize the number of sensors required for surveillance monitoring by employing a mobile sensor to recover the occluded views of the stationary sensors. We showed that with the training data captured on 1 m from the camera, the algorithm can detect the detailed posture of the subject from 1, 2, 3, and 4 meters away from the sensor during the walking and standing with average accuracy of 93% and for different orientation with respect to the sensor by 71% accuracy.","PeriodicalId":14776,"journal":{"name":"J. Sensors","volume":"117 2","pages":"1-20"},"PeriodicalIF":0.0000,"publicationDate":"2022-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"On Study of 1D Depth Scans as an Alternative Feature for Human Pose Detection in a Sensor Network\",\"authors\":\"Maryam S. Rasoulidanesh, S. Payandeh\",\"doi\":\"10.1155/2022/2267107\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Inspired by the notion of swarm robotics, sensing, and minimalism, in this paper, we study and analyze how a collection of only 1D depth scans can be used as a part of the minimum feature for human body detection and its segmentation in a point cloud. In relation to the traditional approaches which require a complete point cloud model representation for skeleton model reconstruction, our proposed approach offers a lower computation and power consumption, especially in sensor and robotic networks. 
Our main objective is to investigate if the reduced number of training data through a collection of 1D scans of a subject is related to the rate of recognition and if it can be used to accurately detect the human body and its posture. The method takes advantage of the frequency components of the depth images (here, we refer to it as a 1D scan). To coordinate a collection of these 1D scans obtained through a sensor network, we also proposed a sensor scheduling framework. The framework is evaluated using two stationary depth sensors and a mobile depth sensor. The performance of our method was analyzed through movements and posture details of a subject having two relative orientations with respect to the sensors with two classes of postures, namely, walking and standing. The novelty of the paper can be summarized in 3 main points. Firstly, unlike deep learning methods, our approach would require a smaller dataset for training. Secondly, our case studies show that the method uses very limited training dataset and still can detect the unseen situation and reasonably estimate the orientation and detail of the posture. Finally, we propose an online scheduler to improve the energy efficiency of the network sensor and minimize the number of sensors required for surveillance monitoring by employing a mobile sensor to recover the occluded views of the stationary sensors. We showed that with the training data captured on 1 m from the camera, the algorithm can detect the detailed posture of the subject from 1, 2, 3, and 4 meters away from the sensor during the walking and standing with average accuracy of 93% and for different orientation with respect to the sensor by 71% accuracy.\",\"PeriodicalId\":14776,\"journal\":{\"name\":\"J. Sensors\",\"volume\":\"117 2\",\"pages\":\"1-20\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-08-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"J. Sensors\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1155/2022/2267107\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Sensors","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1155/2022/2267107","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
On Study of 1D Depth Scans as an Alternative Feature for Human Pose Detection in a Sensor Network
Inspired by the notions of swarm robotics, sensing, and minimalism, in this paper we study and analyze how a collection of only 1D depth scans can serve as a minimal feature for detecting the human body and segmenting it in a point cloud. Compared with traditional approaches, which require a complete point-cloud representation to reconstruct a skeleton model, our proposed approach offers lower computation and power consumption, which is especially valuable in sensor and robotic networks. Our main objective is to investigate whether the reduced amount of training data obtained through a collection of 1D scans of a subject affects the recognition rate, and whether such scans can be used to accurately detect the human body and its posture. The method takes advantage of the frequency components of the depth image (here referred to as a 1D scan). To coordinate the collection of these 1D scans across a sensor network, we also propose a sensor-scheduling framework, which we evaluate using two stationary depth sensors and one mobile depth sensor. The performance of the method is analyzed on the movements and posture details of a subject at two relative orientations with respect to the sensors and with two posture classes, namely walking and standing. The novelty of the paper can be summarized in three main points. First, unlike deep-learning methods, our approach requires a smaller dataset for training. Second, our case studies show that even with a very limited training dataset, the method can still detect unseen situations and reasonably estimate the orientation and details of the posture. Third, we propose an online scheduler that improves the energy efficiency of the sensor network and minimizes the number of sensors required for surveillance monitoring by employing a mobile sensor to recover views occluded from the stationary sensors. We show that, with training data captured at 1 m from the camera, the algorithm can detect the detailed posture of a subject at 1, 2, 3, and 4 m from the sensor while walking and standing with an average accuracy of 93%, and can classify different orientations with respect to the sensor with 71% accuracy.
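The abstract does not include an implementation, so the following is a minimal sketch of the feature it describes: treating one line of a depth frame as a 1D scan and summarizing it by its frequency components. It assumes the scan is a single horizontal row of the depth image and that a nearest-template classifier stands in for whatever recognizer the authors actually use; scan_feature, classify, row, and n_bins are hypothetical names introduced here for illustration, not the paper's published pipeline.

import numpy as np

def scan_feature(depth_frame: np.ndarray, row: int, n_bins: int = 32) -> np.ndarray:
    """Extract a 1D depth scan (one image row) and summarize it by the
    magnitudes of its low-frequency FFT components.

    Assumptions (not from the paper): a single horizontal row serves as
    the scan, and only the first n_bins magnitude coefficients are kept
    as the feature vector.
    """
    scan = depth_frame[row, :].astype(np.float64)
    scan = scan - scan.mean()              # remove the DC offset so absolute distance matters less
    spectrum = np.abs(np.fft.rfft(scan))   # magnitude spectrum of the 1D scan
    feat = spectrum[:n_bins]
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat  # scale-normalize the feature

def classify(feature: np.ndarray, templates: dict[str, np.ndarray]) -> str:
    """Nearest-template classification of a scan feature.

    templates maps a posture label (e.g. "walking", "standing") to a
    reference feature built from the small training set; this stand-in
    classifier is an assumption made for illustration.
    """
    return min(templates, key=lambda label: np.linalg.norm(feature - templates[label]))

# Hypothetical usage: build one template per posture from training frames
# captured at 1 m, then classify a live frame.
#   templates = {"walking": scan_feature(train_walk, row=240),
#                "standing": scan_feature(train_stand, row=240)}
#   label = classify(scan_feature(live_frame, row=240), templates)

Under these assumptions, a sensor node would compute and transmit only n_bins coefficients per frame rather than a full point cloud, which is consistent with the computation and power savings the abstract claims over skeleton reconstruction from complete point-cloud models.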