Farah Zakiyah Rahmanti , Moch. Iskandar Riansyah , Oddy Virgantara Putra , Eko Mulyanto Yuniarno , Mauridhi Hery Purnomo
{"title":"3D human pose point cloud data of light detection and ranging (LiDAR)","authors":"Farah Zakiyah Rahmanti , Moch. Iskandar Riansyah , Oddy Virgantara Putra , Eko Mulyanto Yuniarno , Mauridhi Hery Purnomo","doi":"10.1016/j.dib.2025.112043","DOIUrl":null,"url":null,"abstract":"<div><div>3D Light Detection and Ranging (LiDAR) sensors are closely related to computer vision and deep learning. 3D LiDAR sensors are commonly embedded in smart vehicles to segment humans, cars, trucks, motors, and other objects. However, 3D LiDAR can also be used indoors to predict human poses that are more friendly to a person's privacy because 3D LiDAR does not capture facial images, but it produces data in the form of point clouds. The point cloud produces spatial, geometric, and temporal information which can be used to predict, detect, and classify human poses and activities. The data output from 3D LiDAR, which includes spatial and temporal data, is in PCAP (.pcap) and JSON (.json) formats. The PCAP file contains the sequence frame of the 3D human pose point cloud, and the JSON file contains the metadata. Each human pose class label has one PCAP file and one JSON file. The raw spatio-temporal data must be processed into PCD format as a 3D human pose point cloud dataset for each human pose.</div><div>The total human pose dataset is 1400 3D point cloud data with PCD format (.pcd) used for the training and testing process in deep learning, consisting of four human pose labels. The label classes are hands-to-the-side, sit-down, squat-down, and stand-up human poses, with each class having 280 3D point cloud data used as training data. While the test data amounted to 280 3D point cloud data. The data collection process uses 3D LiDAR, a tripod, a personal computer/laptop, and a talent, demonstrating basic human poses. The 3D LiDAR used is OS1, a product of Ouster, which has a range of 90–200 m, 128 channels of resolution, and a temperature of -40 – 60° C. 
For talent, there is one person and male gender in this current shooting. However, in its development, it can also take female or children or elderly talent to enrich the human pose dataset. The talent is between 30 and 40 years old. The distance between the 3D LiDAR and the talent position is 120 cm. Data collection took place from 10:00 a.m. to 1:00 pm. indoors.</div><div>This dataset is used for human pose prediction using one of the deep learning algorithms, Convolutional Neural Network (CNN). However, the developers can also use other deep learning algorithms such as transformers, Graph Neural Network (GNN), etc.</div></div>","PeriodicalId":10973,"journal":{"name":"Data in Brief","volume":"62 ","pages":"Article 112043"},"PeriodicalIF":1.4000,"publicationDate":"2025-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Data in Brief","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2352340925007656","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0
Abstract
3D Light Detection and Ranging (LiDAR) sensors are widely used in computer vision and deep learning. They are commonly embedded in smart vehicles to segment humans, cars, trucks, motorcycles, and other objects. However, 3D LiDAR can also be used indoors to predict human poses in a privacy-friendly way: it does not capture facial images but instead produces data in the form of point clouds. A point cloud carries spatial, geometric, and temporal information that can be used to predict, detect, and classify human poses and activities. The raw output of the 3D LiDAR, covering both spatial and temporal data, is stored in PCAP (.pcap) and JSON (.json) formats: the PCAP file contains the sequence of frames of the 3D human pose point cloud, and the JSON file contains the metadata. Each human pose class label has one PCAP file and one JSON file. This raw spatio-temporal data must be processed into PCD format to obtain a 3D human pose point cloud dataset for each pose.
The complete dataset comprises 1400 3D point clouds in PCD format (.pcd) used for training and testing in deep learning, covering four human pose labels: hands-to-the-side, sit-down, squat-down, and stand-up. Each class has 280 point clouds used as training data (1120 in total), and the test set contains a further 280 point clouds. The data collection setup consists of a 3D LiDAR, a tripod, a personal computer/laptop, and a talent demonstrating the basic human poses. The 3D LiDAR is an Ouster OS1, which has a range of 90-200 m, 128 channels of resolution, and an operating temperature of -40 to 60 °C. The talent in this recording session is one male person between 30 and 40 years old; in future development, female, child, or elderly talents could also be recorded to enrich the dataset. The distance between the 3D LiDAR and the talent is 120 cm. Data collection took place indoors from 10:00 a.m. to 1:00 p.m.
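Each sample in the dataset is an ASCII PCD file. The PCD format is a plain-text header (FIELDS, WIDTH, POINTS, DATA, etc.) followed by one row of values per point. As a minimal sketch, the following pure-Python reader parses an ASCII PCD string; the synthetic example is illustrative and its field layout is an assumption, not taken from the dataset itself:

```python
# Minimal ASCII .pcd reader (sketch; the dataset's actual field list
# and header comments may differ).
def read_ascii_pcd(text):
    """Parse an ASCII-encoded PCD string into (fields, points)."""
    lines = text.strip().splitlines()
    fields, n_points, data_start = [], 0, 0
    for i, line in enumerate(lines):
        key, _, rest = line.partition(" ")
        if key == "FIELDS":
            fields = rest.split()
        elif key == "POINTS":
            n_points = int(rest)
        elif key == "DATA":
            if rest.strip() != "ascii":
                raise ValueError("only DATA ascii is handled here")
            data_start = i + 1
            break
    points = [tuple(float(v) for v in ln.split())
              for ln in lines[data_start:data_start + n_points]]
    return fields, points

# A tiny synthetic example (not an actual file from the dataset):
SAMPLE = """\
VERSION .7
FIELDS x y z
SIZE 4 4 4
TYPE F F F
COUNT 1 1 1
WIDTH 3
HEIGHT 1
VIEWPOINT 0 0 0 1 0 0 0
POINTS 3
DATA ascii
0.0 0.0 1.2
0.1 -0.2 1.1
0.0 0.3 1.3
"""
fields, pts = read_ascii_pcd(SAMPLE)
print(fields)    # ['x', 'y', 'z']
print(len(pts))  # 3
```

In practice a library such as Open3D (`open3d.io.read_point_cloud("pose.pcd")`) handles both ASCII and binary PCD files; the hand-rolled reader above only illustrates the file layout.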
This dataset is intended for human pose prediction with a Convolutional Neural Network (CNN), one of several applicable deep learning algorithms. However, developers can also use other deep learning architectures such as transformers or Graph Neural Networks (GNNs).
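A standard CNN expects a fixed-size grid input, while a point cloud is an unordered set of points, so some preprocessing is needed. One common approach (a generic sketch, not necessarily the authors' exact pipeline) is to voxelize each point cloud into a binary occupancy grid; the grid size and coordinate bounds below are illustrative assumptions:

```python
import numpy as np

def voxelize(points, grid=16, bounds=(-1.0, 1.0)):
    """Map Nx3 points into a (grid, grid, grid) binary occupancy grid."""
    pts = np.asarray(points, dtype=np.float32)
    lo, hi = bounds
    # Scale coordinates into [0, grid) and bin them into voxel indices.
    idx = ((pts - lo) / (hi - lo) * grid).astype(int)
    idx = np.clip(idx, 0, grid - 1)  # keep out-of-bounds points on the edge
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

# Example: two points that land in two distinct voxels.
vox = voxelize([[0.0, 0.0, 0.0], [0.9, -0.9, 0.5]], grid=8)
print(vox.shape)       # (8, 8, 8)
print(int(vox.sum()))  # 2
```

The resulting grid can be fed to a 3D CNN as a single-channel volume; point-native architectures (e.g. PointNet-style models or GNNs over k-nearest-neighbor graphs) instead consume the raw coordinates directly.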
Journal overview:
Data in Brief provides a way for researchers to easily share and reuse each other's datasets by publishing data articles that:
- Thoroughly describe your data, facilitating reproducibility.
- Make your data, which is often buried in supplementary material, easier to find.
- Increase traffic towards associated research articles and data, leading to more citations.
- Open up doors for new collaborations.
Because you never know what data will be useful to someone else, Data in Brief welcomes submissions that describe data from all research areas.