Sign Language Detection and Recognition using CNN
B. Chowdary, Ajay Purshotam Thota, A. Sreeja, Kotla Nithin Reddy, Karanam Sai Chandana
DOI: 10.1109/ICSCSS57650.2023.10169225
2023 International Conference on Sustainable Computing and Smart Systems (ICSCSS), published 2023-06-14
Abstract
The detection of human motion in video is the focus of this study. In contrast to the current trend of representing activities through statistics of local video features, a description derived from human pose is more informative. To that end, the authors propose a pose-based Convolutional Neural Network descriptor (P-CNN) for action detection. The descriptor captures motion and appearance information along the tracks of body parts. The authors compute P-CNN features from both automatically estimated and manually annotated human poses. The study also explores different temporal aggregation schemes and evaluates them experimentally. The proposed approach consistently outperforms the state of the art on the evaluated datasets.
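As a rough illustration of the temporal aggregation step mentioned in the abstract, the sketch below pools per-frame, per-part CNN descriptors into a single video-level vector. The part list, feature dimension, and max/min pooling scheme are assumptions made for illustration only; they are not taken from the paper, and real CNN feature extraction is assumed to have happened upstream.

# Minimal sketch of pose-part temporal aggregation (illustrative assumptions,
# not the authors' exact configuration).
import numpy as np

PARTS = ["full_body", "upper_body", "left_hand", "right_hand"]  # hypothetical part set
FEAT_DIM = 4096  # hypothetical per-frame CNN feature size


def aggregate_video(part_features: dict) -> np.ndarray:
    """Aggregate per-frame, per-part CNN features into one video-level descriptor.

    part_features maps each part name to an array of shape (num_frames, FEAT_DIM).
    For every part, the frame-wise features are pooled over time with max and min,
    and the pooled vectors of all parts are concatenated.
    """
    pooled = []
    for part in PARTS:
        frames = part_features[part]          # (T, FEAT_DIM)
        pooled.append(frames.max(axis=0))     # max pooling over time
        pooled.append(frames.min(axis=0))     # complementary min pooling over time
    return np.concatenate(pooled)             # (len(PARTS) * 2 * FEAT_DIM,)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake descriptors for a 30-frame clip, standing in for real CNN outputs.
    clip = {part: rng.standard_normal((30, FEAT_DIM)).astype(np.float32) for part in PARTS}
    video_descriptor = aggregate_video(clip)
    print(video_descriptor.shape)  # (32768,) under the assumptions above

The resulting fixed-length vector could then be fed to any standard classifier; the pooling choice is one of several temporal aggregation schemes the study compares.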