{"title":"Est3D2Real-estimated 3D-to-real数据嵌入用于实时手语识别器","authors":"Kishore P.V.V. , Anil Kumar D.","doi":"10.1016/j.patrec.2025.05.012","DOIUrl":null,"url":null,"abstract":"<div><div>Human pose estimation predicts 3D skeletal joints from 2D video data. These estimated 3D joints are sensitive to video data anomalies, posing a threat to applications such as real-time sign language recognition. The challenge lies in the failure of the estimation model to output pose vectors during the signing process, which significantly impacts downstream classification tasks. To address this issue, we propose the development of a lightweight estimated 3D-to-real data embedding network (Est3D2Real). This network is designed to learn the relationships between the outputs of the pose estimation framework and a 3D motion capture system. Est3D2Real is a four-layer fully connected network, consisting of one input layer, two hidden layers, and one output layer. It employs the Mean Squared Error (MSE) loss function to minimize the distance between the two modalities. The trained Est3D2Real model ensures minimal joint loss in real-time downstream classification tasks. Validation is performed on a 100-gloss 3D sign language dataset, captured using both motion capture and MediaPipe frameworks. Subsequent downstream sign classifiers built on top of the trained Est3D2Real model have shown an approximate improvement of 28%. The code with small datasets is made available at <span><span>https://github.com/pvvkishore/Est3D2Real_SL_MediaPipe_2_Motion_Capture</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":54638,"journal":{"name":"Pattern Recognition Letters","volume":"196 ","pages":"Pages 86-92"},"PeriodicalIF":3.3000,"publicationDate":"2025-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Est3D2Real-estimated 3D-to-real data embeddings for real time sign language recognizer\",\"authors\":\"Kishore P.V.V. , Anil Kumar D.\",\"doi\":\"10.1016/j.patrec.2025.05.012\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><div>Human pose estimation predicts 3D skeletal joints from 2D video data. These estimated 3D joints are sensitive to video data anomalies, posing a threat to applications such as real-time sign language recognition. The challenge lies in the failure of the estimation model to output pose vectors during the signing process, which significantly impacts downstream classification tasks. To address this issue, we propose the development of a lightweight estimated 3D-to-real data embedding network (Est3D2Real). This network is designed to learn the relationships between the outputs of the pose estimation framework and a 3D motion capture system. Est3D2Real is a four-layer fully connected network, consisting of one input layer, two hidden layers, and one output layer. It employs the Mean Squared Error (MSE) loss function to minimize the distance between the two modalities. The trained Est3D2Real model ensures minimal joint loss in real-time downstream classification tasks. Validation is performed on a 100-gloss 3D sign language dataset, captured using both motion capture and MediaPipe frameworks. Subsequent downstream sign classifiers built on top of the trained Est3D2Real model have shown an approximate improvement of 28%. 
The code with small datasets is made available at <span><span>https://github.com/pvvkishore/Est3D2Real_SL_MediaPipe_2_Motion_Capture</span><svg><path></path></svg></span>.</div></div>\",\"PeriodicalId\":54638,\"journal\":{\"name\":\"Pattern Recognition Letters\",\"volume\":\"196 \",\"pages\":\"Pages 86-92\"},\"PeriodicalIF\":3.3000,\"publicationDate\":\"2025-06-09\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Pattern Recognition Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0167865525002077\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Pattern Recognition Letters","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0167865525002077","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Est3D2Real-estimated 3D-to-real data embeddings for real time sign language recognizer
Human pose estimation predicts 3D skeletal joints from 2D video data. These estimated 3D joints are sensitive to anomalies in the video data, which undermines applications such as real-time sign language recognition. The challenge lies in the estimation model failing to output pose vectors during the signing process, which significantly impacts downstream classification tasks. To address this issue, we propose a lightweight estimated-3D-to-real data embedding network (Est3D2Real), designed to learn the relationship between the outputs of a pose estimation framework and a 3D motion capture system. Est3D2Real is a four-layer fully connected network, consisting of one input layer, two hidden layers, and one output layer, and it is trained with a Mean Squared Error (MSE) loss to minimize the distance between the two modalities. The trained Est3D2Real model ensures minimal joint loss in real-time downstream classification tasks. Validation is performed on a 100-gloss 3D sign language dataset captured with both motion capture and MediaPipe frameworks. Downstream sign classifiers built on top of the trained Est3D2Real model show an improvement of approximately 28%. The code, with small datasets, is available at https://github.com/pvvkishore/Est3D2Real_SL_MediaPipe_2_Motion_Capture.
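The abstract gives enough detail to sketch the embedding idea: a small fully connected network that maps estimator-produced 3D joints toward motion-capture 3D joints under an MSE loss. The PyTorch sketch below is an illustrative reconstruction only; the hidden-layer width (256), joint count (33, MediaPipe's pose landmark count), optimizer, and learning rate are assumptions rather than the authors' published configuration.

```python
# Hedged sketch of an Est3D2Real-style embedding network: one input layer,
# two hidden layers, and one output layer, trained with MSE loss to pull
# MediaPipe-estimated 3D joints toward motion-capture 3D joints.
# Widths, joint count, and training hyperparameters are assumptions.
import torch
import torch.nn as nn

NUM_JOINTS = 33          # assumed MediaPipe pose landmark count
IN_DIM = NUM_JOINTS * 3  # flattened (x, y, z) per joint


class Est3D2Real(nn.Module):
    def __init__(self, in_dim: int = IN_DIM, hidden: int = 256, out_dim: int = IN_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),   # input layer -> hidden layer 1
            nn.Linear(hidden, hidden), nn.ReLU(),   # hidden layer 1 -> hidden layer 2
            nn.Linear(hidden, out_dim),             # hidden layer 2 -> output layer
        )

    def forward(self, estimated_joints: torch.Tensor) -> torch.Tensor:
        return self.net(estimated_joints)


def train_step(model, optimizer, estimated_joints, mocap_joints):
    """One MSE training step aligning estimated joints with motion-capture joints."""
    optimizer.zero_grad()
    predicted = model(estimated_joints)
    loss = nn.functional.mse_loss(predicted, mocap_joints)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = Est3D2Real()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    # Toy batch: 8 frames of estimated joints and corresponding mocap joints.
    est = torch.randn(8, IN_DIM)
    mocap = torch.randn(8, IN_DIM)
    print(train_step(model, optimizer, est, mocap))
```

In practice the trained network would sit between the pose estimator and the downstream sign classifier, so that classification operates on the embedded (mocap-like) joint vectors rather than the raw estimates.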
Journal introduction:
Pattern Recognition Letters aims at rapid publication of concise articles of a broad interest in pattern recognition.
Subject areas include all the current fields of interest represented by the Technical Committees of the International Association of Pattern Recognition, and other developing themes involving learning and recognition.