{"title":"Enhancing 3D Pose Estimation Accuracy from Multiple Camera Perspectives through Machine Learning Model Integration","authors":"Ervinas Gisleris, A. Serackis","doi":"10.1109/AIEEE58915.2023.10134772","DOIUrl":null,"url":null,"abstract":"In this investigation, we propose a machine learning approach to integrate estimations from two orthogonal camera views, separated by approximately 90 degrees, using a three-layer feed-forward neural network to refine and unify 3D pose estimations. The primary objective is to minimize the discrepancies between the estimated joint coordinates and the ground truth, consequently improving the overall accuracy of the 3D pose estimation process. Our neural network architecture comprises two hidden layers with the ReLU activation function and an output layer with the linear activation function to generate the final 3D coordinates of human skeleton joints. Integration of estimations from two orthogonal camera perspectives allows the model to account for occlusions, varying lighting conditions, and pose diversity, providing a more comprehensive representation of the 3D pose. The network is trained and evaluated on a public CMU Panoptic dataset that contains videos with a wide range of poses.","PeriodicalId":149255,"journal":{"name":"2023 IEEE 10th Jubilee Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE 10th Jubilee Workshop on Advances in Information, Electronic and Electrical Engineering (AIEEE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AIEEE58915.2023.10134772","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
In this investigation, we propose a machine learning approach that integrates estimates from two orthogonal camera views, separated by approximately 90 degrees, using a three-layer feed-forward neural network to refine and unify 3D pose estimates. The primary objective is to minimize the discrepancy between the estimated joint coordinates and the ground truth, thereby improving the overall accuracy of the 3D pose estimation process. Our neural network architecture comprises two hidden layers with the ReLU activation function and an output layer with a linear activation function that produces the final 3D coordinates of the human skeleton joints. Integrating estimates from two orthogonal camera perspectives allows the model to account for occlusions, varying lighting conditions, and pose diversity, providing a more comprehensive representation of the 3D pose. The network is trained and evaluated on the public CMU Panoptic dataset, which contains videos covering a wide range of poses.
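
The abstract specifies only the layer structure (two ReLU hidden layers, linear output) and the objective (minimizing the discrepancy to ground-truth joints). The sketch below illustrates one way such a fusion network could look; it is not the authors' implementation. PyTorch, the joint count (19), the hidden width (256), and the MSE loss are assumptions introduced here for illustration.

```python
# Minimal sketch of a two-view pose-fusion network as described in the abstract.
# Assumptions (not stated in the paper): PyTorch, 19 skeleton joints,
# hidden width of 256, and MSE as the discrepancy loss.
import torch
import torch.nn as nn

NUM_JOINTS = 19               # assumed joint count
IN_DIM = 2 * NUM_JOINTS * 3   # concatenated 3D estimates from the two camera views
OUT_DIM = NUM_JOINTS * 3      # fused 3D joint coordinates


class PoseFusionNet(nn.Module):
    """Three-layer feed-forward network: two ReLU hidden layers, linear output."""

    def __init__(self, hidden: int = 256):  # hidden width is an assumption
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IN_DIM, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, OUT_DIM),  # linear activation on the output layer
        )

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor) -> torch.Tensor:
        # view_a, view_b: (batch, NUM_JOINTS * 3) per-view pose estimates
        return self.net(torch.cat([view_a, view_b], dim=-1))


# Training objective: minimize the discrepancy between fused joints and
# ground truth, here approximated with a mean-squared-error loss.
model = PoseFusionNet()
loss_fn = nn.MSELoss()
view_a = torch.randn(8, NUM_JOINTS * 3)
view_b = torch.randn(8, NUM_JOINTS * 3)
ground_truth = torch.randn(8, NUM_JOINTS * 3)
loss = loss_fn(model(view_a, view_b), ground_truth)
```

In practice, the per-view inputs would come from a single-view 3D pose estimator run on each of the two cameras, with the network learning to reconcile the two noisy estimates into one skeleton; the random tensors above stand in only to show the expected shapes.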