{"title":"Human Position Detection Using Point Cloud Data for Human-Robot Safety Systems","authors":"Nehal Amer, J. Humphries, Nitin Nandeshwar","doi":"10.1109/ICMERR56497.2022.10097801","DOIUrl":null,"url":null,"abstract":"Artificial Intelligence algorithms have become extremely fast at identifying humans in real time. However, as Industry moves more towards collaborative working with robots and machines, the identification of humans is not sufficient. Knowledge of the exact location, and trajectory of the human movement is needed so that safe collaborative environments are built. This study presents an architecture that can be used to identify humans and extract their position in the space. The proposed system uses the Mask R-CNN instance segmentation model for identification of humans. The pixels which belong to each human in the image captured are identified. Then, the extracted pixels are mapped to the depth and point cloud images obtained from Azure Kinect camera to extract the positional data for each human in space. The impact of this research paper is that we propose a solution that captures XYZ co-ordinates of humans in a space.","PeriodicalId":302481,"journal":{"name":"2022 7th International Conference on Mechanical Engineering and Robotics Research (ICMERR)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 7th International Conference on Mechanical Engineering and Robotics Research (ICMERR)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICMERR56497.2022.10097801","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Artificial intelligence algorithms have become extremely fast at identifying humans in real time. However, as industry moves towards collaborative working with robots and machines, identifying humans is no longer sufficient: knowledge of each human's exact location and movement trajectory is needed to build safe collaborative environments. This study presents an architecture that identifies humans and extracts their position in space. The proposed system uses the Mask R-CNN instance segmentation model to identify humans and the pixels belonging to each human in the captured image. The extracted pixels are then mapped to the depth and point-cloud images obtained from an Azure Kinect camera to extract the positional data for each human. The contribution of this paper is a solution that captures the XYZ coordinates of humans in a space.
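The mask-to-point-cloud mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a boolean instance mask (such as one produced by Mask R-CNN) and a point-cloud image registered to the colour frame (the Azure Kinect SDK can transform its depth map into such a per-pixel XYZ image); the function name and array layout are hypothetical.

```python
import numpy as np

def human_xyz_from_mask(mask: np.ndarray, point_cloud: np.ndarray) -> np.ndarray:
    """Estimate one human's XYZ position from an instance segmentation mask.

    mask:        (H, W) boolean array, True where pixels belong to the human.
    point_cloud: (H, W, 3) array of per-pixel XYZ coordinates (e.g. in mm),
                 registered to the colour image.
    Returns the centroid (x, y, z) over mask pixels with valid depth.
    """
    points = point_cloud[mask]        # (N, 3) XYZ values for the mask pixels
    valid = points[:, 2] > 0          # discard pixels with no depth reading
    return points[valid].mean(axis=0)

# Toy example: a 4x4 "point cloud" with a 2x2 human mask about 2 m away.
pc = np.zeros((4, 4, 3))
pc[1:3, 1:3] = [1000.0, 500.0, 2000.0]
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True
centroid = human_xyz_from_mask(mask, pc)
```

Taking the centroid over the masked points is only one possible reduction; a safety system might instead track the nearest point per human or the full extent of the masked region.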