{"title":"Eliminating the Repetitive Motions as a Preprocessing step for Fast Human Action Retrieval","authors":"Mohsen Ramezani, F. Yaghmaee","doi":"10.1109/ICCKE48569.2019.8965087","DOIUrl":null,"url":null,"abstract":"Today, video searching methods dropped behind the growth of using capturing devices. Action retrieval is a new research field which seeks to use the captured human action for searching the videos. As most human actions consist of similar motions which are repeated over time, we seek to propose a method for eliminating the repetitive motions before retrieving the videos. This method, as a preprocessing step, can decrease the volume of the retrieval computations for each video. Here, a function is used to calculate a value per each pixel as its movement energy. Then, CWT (Continuous Wavelet Transform) is used for mapping the response function of the points into the frequency space to find similar motion patterns more easier. The DTW (Dynamic Time Wrapping) is then applied on the new space to find similar frequency patterns (episodes) over time. Finally, one of the similar episodes, i.e. some sequential frames, remains for the retrieval computations and others are eliminated. The proposed method is evaluated on KTH, UCFYT, and HMDB datasets and results indicate the proper performance of the proposed method. Eliminating the repetitive motions results into significant reduction in retrieval computations and time.","PeriodicalId":6685,"journal":{"name":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","volume":"104 1","pages":"26-31"},"PeriodicalIF":0.0000,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 9th International Conference on Computer and Knowledge Engineering (ICCKE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCKE48569.2019.8965087","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Today, video searching methods have fallen behind the rapid growth of video capturing devices. Action retrieval is a new research field that uses a captured human action as the query for searching videos. Since most human actions consist of similar motions that are repeated over time, we propose a method for eliminating these repetitive motions before retrieving the videos. As a preprocessing step, this method decreases the volume of the retrieval computations for each video. Here, a function is used to calculate a value for each pixel as its movement energy. Then, the CWT (Continuous Wavelet Transform) maps the response function of the points into the frequency space, where similar motion patterns are easier to find. DTW (Dynamic Time Warping) is then applied in the new space to find similar frequency patterns (episodes) over time. Finally, one of the similar episodes, i.e., some sequential frames, is retained for the retrieval computations and the others are eliminated. The proposed method is evaluated on the KTH, UCFYT, and HMDB datasets, and the results indicate its proper performance. Eliminating the repetitive motions results in a significant reduction in retrieval computations and time.
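
The abstract outlines a three-stage pipeline: per-pixel movement energy, a CWT mapping into frequency space, and DTW to match repeated episodes. The sketch below is a minimal, hypothetical illustration of that pipeline, not the authors' implementation: it assumes the per-pixel energy is aggregated into a single 1-D signal per frame, uses PyWavelets for the CWT, a plain DTW implementation, and a fixed-length window with an arbitrary distance threshold as a stand-in for the paper's episode detection.

```python
# Hypothetical sketch of the preprocessing pipeline described in the abstract.
# Window length, wavelet, scales, and threshold are illustrative assumptions.
import numpy as np
import pywt  # PyWavelets, for the Continuous Wavelet Transform


def movement_energy(frames):
    """Aggregate per-pixel movement energy into one value per frame
    (absolute frame difference summed over all pixels)."""
    frames = frames.astype(np.float32)
    return np.abs(np.diff(frames, axis=0)).sum(axis=(1, 2))


def dtw_distance(a, b):
    """Plain O(len(a) * len(b)) Dynamic Time Warping distance."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return cost[n, m]


def drop_repetitive_episodes(frames, window=30, threshold=5.0):
    """Map the energy signal into frequency space with the CWT, split it into
    fixed-length episodes, compare episodes with DTW, and keep only the frames
    of the first episode in each group of near-duplicates."""
    energy = movement_energy(frames)                      # 1-D signal over time
    coeffs, _ = pywt.cwt(energy, scales=np.arange(1, 16), wavelet="morl")
    response = coeffs.mean(axis=0)                        # frequency-space response per frame

    starts = range(0, len(response) - window + 1, window)
    keep = np.ones(len(frames), dtype=bool)
    kept_episodes = []                                    # representatives already retained
    for s in starts:
        episode = response[s:s + window]
        if any(dtw_distance(episode, rep) < threshold for rep in kept_episodes):
            keep[s:s + window] = False                    # repetitive episode -> eliminate
        else:
            kept_episodes.append(episode)
    return frames[keep]
```

Given a video as an array of grayscale frames with shape (T, H, W), `drop_repetitive_episodes(video)` would return the reduced frame sequence to be passed on to the retrieval stage; the threshold would need tuning per dataset.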