2013 Visual Communications and Image Processing (VCIP): Latest Publications

Compressive video sampling from a union of data-driven subspaces
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706390
Yong Li, H. Xiong, Xinwei Ye
Abstract: Compressive sampling (CS) has recently become an active research field in signal processing. To further reduce the number of measurements required and to recover a signal x more efficiently, recent approaches assume that x lives in a union of subspaces (UoS). Unlike previous approaches, this paper proposes a novel method to sample and recover an unknown signal from a union of data-driven subspaces (UoDS). Instead of a fixed set of supports, the UoDS is learned from classified signal series that are uniquely formed by block matching. The bases of these data-driven subspaces are regularized after dimensionality reduction via principal component extraction. A corresponding recovery solution with provable performance guarantees is also given, which takes full advantage of the block-sparsity structure and improves recovery efficiency. In practice, the proposed scheme is applied to sampling and recovering frames in video sequences. Experimental results demonstrate that the proposed video sampling achieves better sampling and recovery performance than classical CS.
Citations: 0
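The core pipeline the abstract describes (learn a subspace basis from training blocks by principal component extraction, take a few random measurements, recover within the learned subspace) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' UoDS method: the block-matching classification and the union over multiple subspaces are omitted, and all dimensions are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training blocks (vectorized patches); synthetic data drawn from a
# low-dimensional subspace so the structure is explicit.
d, k, n_train = 64, 8, 500
true_basis = np.linalg.qr(rng.normal(size=(d, k)))[0]
train = true_basis @ rng.normal(size=(k, n_train))

# Learn a data-driven subspace basis by principal component extraction.
U, _, _ = np.linalg.svd(train, full_matrices=False)
B = U[:, :k]                          # d x k learned basis

# Compressive sampling: m << d random measurements of an unseen signal.
m = 16
Phi = rng.normal(size=(m, d)) / np.sqrt(m)
x = true_basis @ rng.normal(size=k)
y = Phi @ x

# Recovery: least squares for the subspace coefficients, then re-project.
coef, *_ = np.linalg.lstsq(Phi @ B, y, rcond=None)
x_hat = B @ coef

rel_err = np.linalg.norm(x - x_hat) / np.linalg.norm(x)
print(f"relative recovery error: {rel_err:.2e}")
```

With only 16 of 64 samples the signal is recovered essentially exactly, because it lies in the learned 8-dimensional subspace; this is the measurement saving that subspace priors buy over generic sparsity.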
Unsupervised color classifier training for soccer player detection
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706424
S. Gerke, Shiwang Singh, A. Linnemann, P. Ndjiki-Nya
Abstract: Player detection in sports video is a challenging task: in contrast to typical surveillance applications, a pan-tilt-zoom camera model is used, so simple background-learning approaches cannot be applied. Furthermore, camera motion causes severe motion blur, making gradient-based approaches less robust than in settings where the camera is static. The contribution of this paper is a sequence-adaptive approach that utilizes color information in an unsupervised manner to improve detection accuracy. To this end, different color features, namely color histograms, color spatiograms, and a color-and-edge directivity descriptor, are evaluated. It is shown that the proposed color-adaptive approach improves detection accuracy: in terms of maximum F1 score, an improvement from 0.79 to 0.81 is reached using block-wise HSV histograms, and the average number of false positives per image (FPPI) at two fixed recall levels decreases by approximately 23%.
Citations: 13
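The best-performing feature in the abstract, a block-wise HSV histogram, is easy to sketch. The block grid size and bin count below are hypothetical parameters, and this is only the feature extraction step, not the paper's unsupervised classifier training:

```python
import colorsys

import numpy as np


def block_hsv_histograms(rgb, blocks=2, bins=4):
    """Block-wise HSV histogram feature (illustrative parameters).

    rgb: H x W x 3 float array in [0, 1].
    Returns a 1-D feature: blocks*blocks cells, a `bins`-bin normalized
    histogram per HSV channel in each cell.
    """
    h, w, _ = rgb.shape
    hsv = np.array([[colorsys.rgb_to_hsv(*px) for px in row] for row in rgb])
    feats = []
    bh, bw = h // blocks, w // blocks
    for by in range(blocks):
        for bx in range(blocks):
            cell = hsv[by * bh:(by + 1) * bh, bx * bw:(bx + 1) * bw]
            for c in range(3):
                hist, _ = np.histogram(cell[..., c], bins=bins, range=(0, 1))
                feats.append(hist / hist.sum())
    return np.concatenate(feats)


rng = np.random.default_rng(1)
img = rng.random((16, 16, 3))
f = block_hsv_histograms(img)
print(f.shape)   # 2*2 blocks * 3 channels * 4 bins = 48 dims
```

Keeping per-block histograms (rather than one global histogram) preserves the coarse spatial layout of a player's kit colors, which is what makes the feature discriminative here.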
Multi-model prediction for image set compression
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706334
Zhongbo Shi, Xiaoyan Sun, Feng Wu
Abstract: The key task in image set compression is how to efficiently remove set redundancy among images and within a single image. In this paper, we propose the first multi-model prediction (MoP) method for image set compression, which significantly reduces inter-image redundancy. Unlike previous prediction methods, our MoP enhances the correlation between images using feature-based geometric multi-model fitting. Based on the estimated geometric models, multiple deformed prediction images are generated to reduce geometric distortions in different image regions. Block-based adaptive motion compensation is then adopted to further eliminate local variations. Experimental results demonstrate the advantage of our approach, especially for images with complicated scenes and geometric relationships.
Citations: 9
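One building block of the approach, fitting a geometric model to feature correspondences, can be sketched with a single 2-D affine transform estimated by least squares. The paper fits several such models to different regions (and the abstract does not say affine specifically), so treat this as an illustrative single-model step on synthetic matches:

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic matched keypoints between image A and image B, related by an
# affine transform plus localization noise (all values hypothetical).
src = rng.random((20, 2)) * 100
A_true = np.array([[0.98, -0.05], [0.04, 1.02]])
t_true = np.array([3.0, -1.5])
dst = src @ A_true.T + t_true + rng.normal(scale=0.1, size=src.shape)

# Solve dst ~= [src 1] @ M for the 3x2 affine parameter matrix M.
X = np.hstack([src, np.ones((len(src), 1))])
M, *_ = np.linalg.lstsq(X, dst, rcond=None)

pred = X @ M
resid = np.sqrt(np.mean((pred - dst) ** 2))
print(f"RMS fitting residual: {resid:.3f}")
```

Warping the reference image with each fitted model yields the "deformed prediction images" the abstract mentions; block-based motion compensation then cleans up what the global models miss.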
An adaptive texture-depth rate allocation estimation technique for low latency multi-view video plus depth transmission
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706332
M. Cordina, C. J. Debono
Abstract: This paper presents an adaptive texture-depth target bit-rate allocation estimation technique for low-latency multi-view video plus depth transmission using a multi-regression model. The proposed technique employs the prediction-mode distribution of the macroblocks at the discontinuity regions of the depth-map video to estimate the optimal texture-depth target bit-rate allocation given the total available bit rate. Tested on various standard test sequences, the model estimates the optimal texture-depth rate allocation in real time with an absolute mean estimation error of 2.5% and a standard deviation of 2.2%. Moreover, it allows the texture-depth rate allocation to adapt to the video sequence with good tracking performance, enabling correct handling of scene changes.
Citations: 1
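A multi-regression model of this kind maps observable statistics to a target rate split. The sketch below fits a linear model predicting the texture-rate fraction from synthetic features; the actual features in the paper (prediction-mode distributions at depth discontinuities) and its model form are not reproduced here, so everything below is a stand-in:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical mode statistics -> optimal texture-rate fraction, with a
# small planted linear relationship plus noise.
n = 200
features = rng.random((n, 3))
true_w = np.array([0.5, -0.2, 0.1])
frac = 0.7 + 0.1 * (features @ true_w) + rng.normal(scale=0.005, size=n)

# Multiple linear regression with an intercept term.
X = np.hstack([np.ones((n, 1)), features])
w, *_ = np.linalg.lstsq(X, frac, rcond=None)

pred = X @ w
mae = np.mean(np.abs(pred - frac))
print(f"mean absolute estimation error: {mae:.4f}")
```

Because the regression is a closed-form least-squares solve over a handful of features, evaluating it per GOP is cheap enough for the real-time, low-latency setting the paper targets.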
Joint video/depth/FEC rate allocation with considering 3D visual saliency for scalable 3D video streaming
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706339
Yanwei Liu, Jinxia Liu, S. Ci, Yun Ye
Abstract: For robust video-plus-depth based 3D video streaming, video, depth, and packet-level forward error correction (FEC) offer many rate combinations with varying 3D visual quality to adapt to dynamic channel conditions. Video/depth/FEC rate allocation under a channel bandwidth constraint is therefore an important optimization problem for robust 3D video streaming. This paper proposes a joint video/depth/FEC rate allocation method that maximizes the receiver's 3D visual quality. By predicting the perceptual 3D visual quality of the different video/depth/FEC rate combinations, the optimal GOP-level combination can be found. Further, the selected FEC rates are unequally assigned to different levels of 3D saliency regions within each video/depth frame. Extensive experimental results validate the effectiveness of the proposed 3D-saliency-based joint video/depth/FEC rate allocation method for scalable 3D video streaming.
Citations: 5
A bank of fast matched filters by decomposing the filter kernel
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706434
Mihails Pudzs, Rihards Fuksis, M. Greitans, Teodors Eglitis
Abstract: In this paper we introduce a bank of fast matched filters designed to extract gradients, edges, lines, and various line crossings. Our work builds on previously introduced filtering approaches such as conventional Matched Filtering (MF), Complex Matched Filtering (CMF), and Generalized Complex Matched Filtering (GCMF), and aims to speed up image processing. The filter-kernel decomposition method is demonstrated for the GCMF, but can be applied similarly to other filters (such as MF, CMF, Gabor filters, spiculation filters, and steerable MF). By introducing a mask-kernel approximation, we show how to substitute the GCMF with several more computationally efficient filters, reducing the overall computational complexity by more than a hundred times. The resulting Fast GCMF retains all the functionality of GCMF (it extracts the desired objects and obtains their angular orientation), at an accuracy cost of only about 26 dB in terms of PSNR.
Citations: 1
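A standard way to decompose a 2-D filter kernel into cheaper pieces, not necessarily the paper's mask-kernel approximation, is a low-rank SVD expansion: each retained term turns one 2-D convolution into two 1-D convolutions. A minimal sketch on a hypothetical oriented kernel (not the GCMF kernel):

```python
import numpy as np

# A hypothetical oriented kernel: separable Gaussian times a tilted cosine.
# cos(ax + by) = cos(ax)cos(by) - sin(ax)sin(by), so this kernel is in
# fact exactly rank 2 -- a best case for the decomposition.
y, x = np.mgrid[-7:8, -7:8]
kernel = np.exp(-(x**2 / 18 + y**2 / 6)) * np.cos(0.6 * x + 0.3 * y)

# SVD of the kernel matrix: kernel = sum_i s_i * outer(u_i, v_i).
U, s, Vt = np.linalg.svd(kernel)

# Keep the dominant terms; each is a vertical 1-D filter u_i followed by
# a horizontal 1-D filter v_i, which is far cheaper than 2-D filtering.
r = 2
approx = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))

err = np.linalg.norm(kernel - approx) / np.linalg.norm(kernel)
print(f"rank-{r} relative approximation error: {err:.2e}")
```

For a 15x15 kernel, a rank-2 expansion costs roughly 2*(15+15) multiplies per pixel instead of 225, the same order of saving (tens to hundreds of times) that the abstract reports for the Fast GCMF.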
Perceptual grouping via untangling Gestalt principles
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706384
Yonggang Qi, Jun Guo, Yi Li, Honggang Zhang, T. Xiang, Yi-Zhe Song, Z. Tan
Abstract: Gestalt principles, a set of conjoining rules derived from studies of human vision, are known to play an important role in computer vision. Many applications, such as image segmentation, contour grouping, and scene understanding, often rely on such rules. However, the problem of Gestalt confliction, i.e., the relative importance of each rule compared with another, remains unsolved. In this paper, we investigate perceptual grouping by quantifying the confliction among three commonly used rules: similarity, continuity, and proximity. More specifically, we propose to quantify the importance of Gestalt rules by solving a learning-to-rank problem, and formulate a multi-label graph-cuts algorithm to group image primitives while taking the learned Gestalt confliction into account. Our experimental results confirm the existence of Gestalt confliction in perceptual grouping and demonstrate improved performance when this confliction is accounted for via the proposed grouping algorithm. Finally, a novel cross-domain image classification method is proposed that exploits perceptual grouping as a representation.
Citations: 2
Lossless predictive coding with Bayesian treatment
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706328
Jing Liu, Xiaokang Yang, Guangtao Zhai, Li Chen, Xianghui Sun, Wanhong Chen, Ying Zuo
Abstract: Natural image statistics have been widely exploited for lossless predictive coding and other applications. However, traditional adaptive techniques focus on the local consistency of the training set regardless of what the predicted target looks like. We investigate introducing the model evidence of the predicted target, since the self-similarity inherent in natural images provides prior information about the distribution of the predicted result. The proposed Bayesian model, which integrates both training evidence and target evidence, takes full advantage of local structure as well as self-similarity. Experimental results demonstrate that the proposed context model achieves the best results compared with state-of-the-art lossless predictors.
Citations: 1
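For context, the "traditional adaptive technique" baseline the abstract contrasts against can be sketched as a least-squares predictor trained on a causal window, with no target evidence at all. The neighbourhood, window size, and training rule below are illustrative choices, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "image" with smooth horizontal structure (rows are random walks).
img = np.cumsum(rng.normal(size=(32, 32)), axis=1)


def predict_pixel(img, i, j, win=8):
    """Predict img[i, j] from west/north/north-west neighbours, with
    weights fit by least squares over a causal training window."""
    rows, targets = [], []
    for ii in range(max(1, i - win), i + 1):
        for jj in range(max(1, j - win), j):      # strictly causal samples
            rows.append([img[ii, jj - 1], img[ii - 1, jj], img[ii - 1, jj - 1]])
            targets.append(img[ii, jj])
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return w @ np.array([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])


pred = predict_pixel(img, 16, 16)
residual = img[16, 16] - pred
print(f"prediction residual: {residual:.3f}")
```

In lossless coding only the residual is entropy-coded, so any reduction in residual magnitude, such as the paper achieves by additionally conditioning on target evidence, translates directly into rate savings.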
Accurate 3D reconstruction of dynamic scenes with Fourier transform assisted phase shifting
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706399
Pengyu Cong, Yueyi Zhang, Zhiwei Xiong, Shenghui Zhao, Feng Wu
Abstract: Phase shifting is a widely used method for accurate and dense 3D reconstruction. However, at least three images of the same scene are required for each reconstruction, so measurement errors are inevitable in dynamic scenes, even with high-speed hardware. In this paper, we propose a Fourier-transform-assisted phase-shifting method to overcome this vulnerability to motion. A new model with motion-related phase shifts is formulated, and the coarse phase measurements obtained by Fourier transform profilometry are used to estimate the unknown phase shifts. The phase errors caused by motion are greatly reduced in this way. Experimental results show that the proposed method obtains accurate and dense 3D reconstruction of dynamic scenes under different kinds of motion.
Citations: 5
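The phase-shifting baseline the paper builds on is the classic three-step formula: with fringe intensities I_n = A + B·cos(φ + δ_n) at shifts δ = (-2π/3, 0, +2π/3), the wrapped phase is φ = atan2(√3·(I1 - I3), 2·I2 - I1 - I3). A static-scene sketch (the paper's contribution, estimating motion-perturbed shifts via Fourier transform profilometry, is not shown):

```python
import numpy as np

# Ground-truth phase profile along one scanline (arbitrary smooth shape).
x = np.linspace(0, 4 * np.pi, 256)
phi_true = 0.8 * np.sin(x / 3) + x / 2
A, B = 0.5, 0.4                                  # background and modulation

# Three fringe images with ideal 2*pi/3 phase shifts.
I1 = A + B * np.cos(phi_true - 2 * np.pi / 3)
I2 = A + B * np.cos(phi_true)
I3 = A + B * np.cos(phi_true + 2 * np.pi / 3)

# Wrapped phase from the classic three-step arctangent formula.
phi_wrapped = np.arctan2(np.sqrt(3) * (I1 - I3), 2 * I2 - I1 - I3)

# Unwrap and compare to ground truth (up to a constant 2*pi multiple).
phi_unwrapped = np.unwrap(phi_wrapped)
offset = np.round((phi_true[0] - phi_unwrapped[0]) / (2 * np.pi)) * 2 * np.pi
err = np.max(np.abs(phi_unwrapped + offset - phi_true))
print(f"max phase error: {err:.2e}")
```

With a static scene the three shifts are exact and the recovered phase matches to machine precision; in a dynamic scene the effective shifts deviate from 2π/3 between exposures, which is precisely the error source the paper's motion-related phase-shift model corrects.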
Entropy of primitive: A top-down methodology for evaluating the perceptual visual information
2013 Visual Communications and Image Processing (VCIP) Pub Date : 2013-11-01 DOI: 10.1109/VCIP.2013.6706358
Xianguo Zhang, Shiqi Wang, Siwei Ma, Shaohui Liu, Wen Gao
Abstract: In this paper, we evaluate perceptual visual information based on a novel top-down methodology: entropy of primitive (EoP). The EoP is determined by the distribution of the atoms used to describe an image, and is shown to correlate closely with perceptual image quality. Based on this visual-information evaluation, we further demonstrate that the EoP is effective in predicting the perceptually lossless representation of natural images. Motivated by this observation, and in order to distinguish whether a loss in the input signal is visually noticeable to the human visual system (HVS), we introduce an EoP-based perceptually lossless profile (PLP). Extensive experiments verify that the proposed profile can efficiently measure the minimum noticeable visual-information distortion and achieves better performance than the state-of-the-art just-noticeable-difference (JND) profile.
Citations: 20
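The abstract defines EoP via the distribution of atoms describing an image. A schematic reading is the Shannon entropy of the atom-usage distribution; how usage counts are obtained and weighted in the paper is not specified here, so the function below is an assumption-laden sketch:

```python
import numpy as np


def entropy_of_primitive(atom_counts):
    """Shannon entropy (bits) of an atom-usage distribution -- a schematic
    reading of EoP; the exact definition in the paper may differ."""
    p = np.asarray(atom_counts, dtype=float)
    p = p[p > 0]
    p /= p.sum()
    return float(-(p * np.log2(p)).sum())


# A "simple" image reuses a few primitives -> low EoP; a "complex" one
# spreads its description over many atoms -> high EoP.
simple = entropy_of_primitive([90, 5, 3, 2, 0, 0, 0, 0])
complex_ = entropy_of_primitive([14, 13, 13, 12, 12, 12, 12, 12])
print(f"simple: {simple:.2f} bits, complex: {complex_:.2f} bits")
```

The intuition matches the paper's top-down framing: the flatter the atom distribution, the more visual information the image carries, and the more distortion is needed before a loss becomes noticeable.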