{"title":"An Adaptive Approach for Salient Motion in Dynamic Scenes","authors":"Min Liu, Weizhong Liu, Daoli Zhang, Zhuoming Feng","doi":"10.1109/MINES.2012.58","DOIUrl":null,"url":null,"abstract":"We propose an adaptive approach for salient motion in dynamic scenes, which models each video clip of τ frames with the dynamic texture(DT) model in a holistic manner. The order of the DT model is chosen adaptively by evaluating the increment of singular entropy. A simple and computationally efficient formula is proposed to measure observability. The formula is related to time-domain eigenvalues and eigenvectors, but the eigenvalue decomposition operation is not needed. The foreground-background segmentation can be obtained by thresholding the observability value of each pixel location. Our proposed method is tested on a various sequences set. Its computational efficiency outperforms the state-of-the-art methods and its equal error rate (EER) is lower than most current methods.","PeriodicalId":208089,"journal":{"name":"2012 Fourth International Conference on Multimedia Information Networking and Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2012-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2012 Fourth International Conference on Multimedia Information Networking and Security","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MINES.2012.58","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
We propose an adaptive approach to salient motion detection in dynamic scenes, which models each video clip of τ frames holistically with the dynamic texture (DT) model. The order of the DT model is chosen adaptively by evaluating the increment of singular entropy. We also propose a simple, computationally efficient formula for measuring observability; it is related to the time-domain eigenvalues and eigenvectors, yet requires no eigenvalue decomposition. A foreground-background segmentation is then obtained by thresholding the observability value at each pixel location. The proposed method is tested on a set of varied sequences: it outperforms state-of-the-art methods in computational efficiency, and its equal error rate (EER) is lower than that of most current methods.
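As a rough illustration of the order-selection and segmentation steps described above, the following Python sketch (not the authors' implementation) assumes a singular-entropy definition based on the normalized singular-value spectrum of the clip's observation matrix, a simple "increment below a tolerance" stopping rule, and a per-pixel observability map supplied by the caller; the paper's exact formulas may differ.

```python
import numpy as np

def choose_dt_order(frames, eps=1e-3, max_order=20):
    """Adaptively pick the DT model order from the increment of singular entropy.

    frames : ndarray of shape (tau, H, W), a grayscale clip of tau frames.
    The entropy definition (normalized singular values of the observation
    matrix) and the stopping rule (increment below eps) are illustrative
    assumptions, not the paper's exact formula.
    """
    tau, h, w = frames.shape
    # Holistic model: each frame becomes one column of the observation matrix Y.
    Y = frames.reshape(tau, h * w).astype(np.float64).T   # shape (H*W, tau)
    # Thin SVD; only the singular values are needed here.
    s = np.linalg.svd(Y, compute_uv=False)
    p = s / s.sum()                                        # normalized spectrum
    increments = -p * np.log(p + 1e-12)                    # singular-entropy terms
    # Keep adding modes until the entropy increment becomes negligible.
    order = 1
    for k in range(1, min(max_order, len(increments))):
        if increments[k] < eps:
            break
        order = k + 1
    return order

def segment_by_observability(observability_map, threshold):
    """Foreground-background segmentation by thresholding a per-pixel
    observability map; computing that map (the paper's eigendecomposition-free
    formula) is assumed to happen elsewhere."""
    return observability_map > threshold
```

Thresholding the observability map yields a binary foreground mask per clip; the observability formula itself, which the abstract states avoids explicit eigenvalue decomposition, is not reproduced in this sketch.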