Key Frame Extraction Analysis Based on Optimized Convolution Neural Network (OCNN) using Intensity Feature Selection (IFS)

T. Prabakaran, L. Kumar, S. Ashabharathi, S. Prabhavathi, Maneesh Vilas Deshpande, M. Fahlevi

2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS), published 2022-10-10
DOI: 10.1109/ICTACS56270.2022.9988474 (https://doi.org/10.1109/ICTACS56270.2022.9988474)
Citations: 0
Abstract
Multimedia video is organized as a sequence of timed frames, and a representative frame conveys the intent of the video's content. Keyframes are the essential element for extracting information from video frames, while unrelated frames make it difficult to identify new key exposures. In this paper, we present a new method for extracting essential frames from motion capture data using an Optimized Convolution Neural Network (OCNN) and Intensity Feature Selection (IFS), for better visualisation and understanding of motion content. The method first removes noise from the motion capture data using a Butterworth filter, then reduces its dimensionality via principal component analysis (PCA). Finding the zero-crossings of velocity in the principal components yields the initial set of key frames. To avoid redundancy, this initial set is then grouped into identical poses. Experiments are based on frames drawn from a motion capture database, and the experimental results suggest that key frames retrieved by our method can improve the visualisation and comprehension of motion capture data.
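The abstract's pipeline (Butterworth denoising, PCA reduction, then zero-crossings of velocity in the principal components as candidate key frames) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the paper's implementation: the filter order, cutoff, signal shape, and the use of only the first principal component are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Synthetic stand-in for motion capture data: 200 frames x 12 joint
# channels of a smooth oscillation plus noise (assumption, not the
# paper's dataset).
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 200)
clean = np.outer(np.sin(t), rng.normal(size=12))
data = clean + 0.05 * rng.normal(size=clean.shape)

# 1) Low-pass Butterworth filter to suppress high-frequency noise
#    (order and cutoff chosen for illustration).
b, a = butter(N=4, Wn=0.2)
smoothed = filtfilt(b, a, data, axis=0)

# 2) PCA via SVD: project the channels onto the leading principal
#    component to reduce dimensionality.
centered = smoothed - smoothed.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
pc1 = centered @ vt[0]

# 3) Velocity along the first component; its zero-crossings mark
#    motion extrema, taken here as candidate key frames.
vel = np.gradient(pc1)
key_frames = np.where(np.diff(np.sign(vel)) != 0)[0]
print(key_frames)
```

A redundancy-removal step (grouping key frames with near-identical poses, as the abstract describes) would follow, e.g. by thresholding pose distances between the selected frames.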