2014 IEEE Visual Communications and Image Processing Conference: Latest Publications

A novel objective quality assessment method for perceptual video coding in conversational scenarios
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051496
Mai Xu, Jingze Zhang, Yuan Ma, Zulin Wang
Abstract: Numerous perceptual video coding approaches have recently been proposed that treat the face as a region of interest (ROI) to improve the perceived visual quality of compressed conversational videos. However, no objective metric exists that is specialized for efficiently evaluating the perceived visual quality of such videos. This paper therefore proposes an efficient objective quality assessment method, Gaussian mixture model based PSNR (GMM-PSNR), for conversational videos. First, eye-tracking experiments, together with a face extraction technique, were carried out to identify the importance of the background, face, and facial-feature regions through eye fixation points. Next, assuming the distribution of eye fixation points obeys a Gaussian mixture model, an importance weight map is generated by introducing a new term, eye fixation points per pixel (efp/p). Finally, GMM-PSNR is computed by assigning different penalties to the distortion of each pixel in a video frame according to the generated weight map. Experimental results show the effectiveness of GMM-PSNR by investigating its correlation with subjective quality on several test video sequences.
Citations: 3
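The abstract above gives the weighting idea but not the formula. A fixation-weighted PSNR in the spirit of GMM-PSNR can be sketched as follows; the normalization and the toy weight map are illustrative assumptions, not the paper's actual GMM-PSNR definition:

```python
import numpy as np

def weighted_psnr(ref, dist, weights, peak=255.0):
    """PSNR with a per-pixel importance weight map (illustrative sketch).

    The weight map is normalized to sum to 1, so distortion in heavily
    fixated regions (e.g. faces) dominates the weighted MSE.
    """
    w = weights / weights.sum()
    wmse = np.sum(w * (ref.astype(float) - dist.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / wmse)

# Toy frame whose upper-left "face" quadrant is weighted 9x the background.
ref = np.full((8, 8), 128.0)
dist = ref + 4.0                     # uniform distortion of 4 gray levels
w = np.ones((8, 8))
w[:4, :4] = 9.0                      # hypothetical importance map
print(round(weighted_psnr(ref, dist, w), 2))
```

With a non-uniform distortion the weighting would pull the score toward the quality of the face region, which is the behavior the metric is after.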
Sample adaptive offset in AVS2 video standard
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051506
Jing Chen, Sunil Lee, E. Alshina, Yinji Piao
Abstract: AVS2 is the next-generation video coding standard under development by the Audio Video coding Standard (AVS) workgroup of China. This paper presents the design of Sample Adaptive Offset (SAO) in AVS2. To ease implementation, a shifted structure is adopted in which the SAO parameter region is shifted from the Largest Coding Unit (LCU) to the upper-left, making the SAO parameter region consistent with the processing region. Moreover, category-dependent offsets are introduced in the edge type, based on statistical results, to improve offset coding, and non-consecutive offset bands are adopted in the band type to optimize the offset bands. Test results show that SAO achieves on average 0.3% to 1.4% luma coding gain under AVS2 common test conditions.
Citations: 1
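SAO's edge-offset mode classifies each sample against two neighbors along a chosen direction and adds a signaled per-category offset. A minimal 1-D sketch using the generic four edge categories; the exact AVS2 category mapping, the category-dependent offset coding, and the paper's shifted parameter regions are not reproduced here:

```python
def edge_category(a, c, b):
    """Classify sample c against neighbors a and b (generic edge classes):
    1 = local minimum, 2 = concave edge, 3 = convex edge, 4 = local maximum,
    0 = none (no offset applied)."""
    if c < a and c < b:
        return 1
    if (c < a and c == b) or (c == a and c < b):
        return 2
    if (c > a and c == b) or (c == a and c > b):
        return 3
    if c > a and c > b:
        return 4
    return 0

def sao_edge_filter(row, offsets):
    """Apply per-category offsets to the interior samples of a 1-D row.
    offsets: dict {category: offset}; boundary samples are left untouched."""
    out = list(row)
    for i in range(1, len(row) - 1):
        cat = edge_category(row[i - 1], row[i], row[i + 1])
        out[i] = row[i] + offsets.get(cat, 0)
    return out

row = [10, 8, 10, 12, 12, 11]
# Hypothetical decoded offsets: pull local minima up, local maxima down.
offsets = {1: 2, 2: 1, 3: -1, 4: -2}
print(sao_edge_filter(row, offsets))
```

The smoothing effect is visible in the toy row: the dip at value 8 is lifted and the convex edges are lowered, which is how SAO reduces ringing around edges.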
Fast algorithm of coding unit depth decision for HEVC intra coding
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051605
Xiaofeng Huang, Huizhu Jia, Kaijin Wei, Jie Liu, Chuang Zhu, Zhengguang Lv, Don Xie
Abstract: The emerging High Efficiency Video Coding standard (HEVC) achieves significantly better coding efficiency than all existing video coding standards. HEVC adopts a quadtree-structured coding unit (CU) to improve compression efficiency, but this incurs very high computational complexity because every CU attempt exhausts all combinations of prediction units (PU) and transform units (TU). To alleviate this burden in HEVC intra coding, a fast CU depth decision algorithm is proposed. The CU texture complexity and the correlation between the current CU and neighbouring CUs are adaptively taken into account when deciding the CU split and the CU depth search range. Experimental results show that the proposed scheme saves 39.3% encoder time on average compared to the default encoding scheme in HM-RExt-13.0, with only a 0.6% BD-BR penalty in coding performance.
Citations: 10
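The abstract does not give the paper's thresholds or its neighbour-correlation rules, but a common proxy for "texture complexity" in early CU depth decisions is block variance. A sketch of such an early-termination rule; the threshold values and the three-way outcome are invented for illustration:

```python
import numpy as np

def texture_variance(block):
    """Sample variance of the pixel values, used as a texture measure."""
    return float(np.var(block.astype(float)))

def early_cu_decision(block, t_smooth=4.0, t_complex=400.0):
    """Illustrative early-termination rule (thresholds are made up):
    smooth blocks stop splitting early, highly textured blocks skip
    evaluating the unsplit mode, and ambiguous blocks run the full RDO."""
    v = texture_variance(block)
    if v < t_smooth:
        return "no-split"      # homogeneous: current depth is enough
    if v > t_complex:
        return "split"         # complex texture: go straight to deeper CUs
    return "full-RDO"          # ambiguous: exhaustive search as usual

flat = np.full((16, 16), 100)
noisy = np.random.default_rng(0).integers(0, 256, (16, 16))
print(early_cu_decision(flat), early_cu_decision(noisy))
```

The time savings come from the two early-exit branches: every block that avoids the full PU/TU search at one depth skips a large share of the encoder's mode evaluations.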
Insights into the role of feedbacks in the tracking loop of a modular fall-detection algorithm
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051592
L. Boulard, E. Baccaglini, R. Scopigno
Abstract: In this paper we propose an innovative video-based architecture for monitoring elderly people. It is built on inexpensive devices and open-source libraries, and preliminary tests show that it achieves significant performance. The overall architecture of the system and its implementation are briefly discussed in terms of their functional blocks, and the effects of feedback loops on the effectiveness of the algorithm are analyzed.
Citations: 1
Fusion side information based on feature and motion extraction for distributed multiview video coding
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051594
Hui Yin, Mengyao Sun, Yumei Wang, Yu Liu
Abstract: In distributed multiview video coding (DMVC), the quality of side information (SI) is crucial for decoding and reconstructing the Wyner-Ziv (WZ) frames. SI quality is degraded for two main reasons: the moving object in a WZ frame is easily misestimated because of fast motion, and the background around the moving object is easily misestimated because of occlusion. Accordingly, a novel SI fusion method is proposed that uses different, complementary schemes to reconstruct the different parts. Motion detection extracts the moving object, which is predicted using both temporal and spatial correlations, while the background around the moving object is predicted using temporal correlations only. Notably, the prediction method used in this paper is based on a feature-based global motion model. Experimental results show high SI quality for the WZ frames and a significant improvement in rate-distortion (RD) performance, especially for sequences with fast-moving objects.
Citations: 3
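The fusion step described above, selecting one of two candidate predictions per pixel, can be sketched with a frame-difference motion mask. The threshold and the two predictor inputs are placeholders for the paper's motion-compensated and global-motion predictions:

```python
import numpy as np

def fuse_side_information(pred_motion, pred_temporal, prev, cur_hint, thr=15):
    """Fuse two SI candidates using a binary motion mask.

    pred_motion   : prediction tuned for moving objects (spatial + temporal).
    pred_temporal : prediction tuned for static background (temporal only).
    prev, cur_hint: frames used only to detect motion via absolute difference.
    """
    mask = np.abs(cur_hint.astype(int) - prev.astype(int)) > thr  # moving pixels
    return np.where(mask, pred_motion, pred_temporal)

# Toy 2x2 example: one pixel moves, the rest is static background.
prev = np.zeros((2, 2), dtype=np.uint8)
cur = np.array([[0, 0], [100, 0]], dtype=np.uint8)
obj_pred = np.full((2, 2), 7)     # hypothetical moving-object prediction
bg_pred = np.full((2, 2), 3)      # hypothetical background prediction
print(fuse_side_information(obj_pred, bg_pred, prev, cur).tolist())
```

The mask routes each pixel to the predictor suited to it, which is the complementarity the abstract describes.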
Accurate image noise level estimation by high order polynomial local surface approximation and statistical inference
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051581
Tingting Kou, Lei Yang, Y. Wan
Abstract: Image noise level estimation is an important step in many image processing tasks such as denoising, compression, and segmentation. Although recently proposed SVD- and PCA-based approaches have produced the most accurate estimates so far, these linear subspace methods still suffer from contamination by the clean signal content, especially at low noise levels. In addition, the common performance evaluation procedure treats test images as noise-free; this ignores the noise already present in those images and invariably incurs a bias. This paper makes two contributions. First, it proposes a new noise level estimation method using nonlinear local surface approximation: the noise-free content of each block is approximated by a high-degree polynomial, the block residual variances, which follow a chi-squared distribution, are sorted, and a quantile of carefully chosen size is used for estimation. Second, it proposes a new performance evaluation procedure that is free from the influence of noise already present in the test images. Experimental results show much better performance than typical state-of-the-art methods in terms of both estimation accuracy and stability.
Citations: 1
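A pared-down version of the pipeline, a polynomial surface fit per block followed by a trimmed average of the residual variances, can be sketched as below. The block size, polynomial degree, and quantile fraction are guesses, and the paper's chi-squared-based statistical inference is replaced by a simple trim of the most textured blocks:

```python
import numpy as np

def estimate_noise_sigma(img, block=8, degree=2, quantile=0.3):
    """Estimate the noise std by fitting a 2-D polynomial surface to each
    block and averaging the smallest residual variances, on the reasoning
    that the least textured blocks are least contaminated by clean content."""
    h, w = img.shape
    # Design matrix for a 2-D polynomial of the given total degree.
    ys, xs = np.mgrid[0:block, 0:block].reshape(2, -1) / (block - 1)
    cols = [xs**i * ys**j for i in range(degree + 1)
            for j in range(degree + 1 - i)]
    A = np.stack(cols, axis=1)
    resid_vars = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            b = img[y:y+block, x:x+block].astype(float).ravel()
            coef, *_ = np.linalg.lstsq(A, b, rcond=None)
            r = b - A @ coef
            dof = b.size - A.shape[1]          # residual degrees of freedom
            resid_vars.append(float(r @ r) / dof)
    resid_vars.sort()
    keep = resid_vars[:max(1, int(len(resid_vars) * quantile))]
    return float(np.sqrt(np.mean(keep)))

rng = np.random.default_rng(1)
noisy = np.full((64, 64), 120.0) + rng.normal(0, 5.0, (64, 64))
print(round(estimate_noise_sigma(noisy), 2))   # close to the true sigma of 5
```

On textured images the trim matters: blocks whose content the polynomial cannot absorb inflate the residual variance, so only the cleanest blocks should drive the estimate.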
A proposed accelerated image copy-move forgery detection
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051552
Sondos M. Fadl, N. Semary
Abstract: Image forgery detection is currently one of the active research fields in image processing, and copy-move (CM) forgery is one of the most frequently used manipulation techniques. This paper proposes an efficient and fast method for detecting copy-move regions that accelerates the block-matching strategy. First, the image is divided into fixed-size overlapping blocks, and the discrete cosine transform is applied to each block to represent its features. Fast k-means clustering groups the blocks into classes, and zigzag scanning reduces the length of each block's feature vector. The feature vectors within each cluster are lexicographically sorted by radix sort, and the correlation between nearby blocks indicates their similarity. Experimental results demonstrate that the proposed method detects duplicated regions efficiently and reduces processing time by up to 50% compared with previous work.
Citations: 19
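The block-matching core of the method can be shown without the k-means acceleration or the radix sort: a plain lexicographic sort over truncated DCT features already groups duplicated blocks next to each other. A sketch with illustrative parameters (the coefficient truncation below is a simple prefix, not the paper's zigzag ordering):

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from a cosine basis matrix."""
    n = block.shape[0]
    k, i = np.mgrid[0:n, 0:n]
    C = np.sqrt(2 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] /= np.sqrt(2)
    return C @ block @ C.T

def find_copy_move(img, block=8, n_coef=9, min_shift=8):
    """Report pairs of block positions with identical truncated DCT features.
    Sketch only: exact matching, no clustering, no zigzag scan."""
    h, w = img.shape
    feats = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            d = dct2(img[y:y+block, x:x+block].astype(float))
            f = tuple(np.round(d.ravel()[:n_coef], 1))  # truncated feature
            feats.append((f, (y, x)))
    feats.sort()                    # lexicographic sort groups duplicates
    pairs = []
    for (f1, p1), (f2, p2) in zip(feats, feats[1:]):
        if f1 == f2 and abs(p1[0] - p2[0]) + abs(p1[1] - p2[1]) >= min_shift:
            pairs.append((p1, p2))
    return pairs

rng = np.random.default_rng(2)
img = rng.integers(0, 256, (32, 32))
img[20:28, 20:28] = img[0:8, 0:8]   # plant a duplicated region
print(find_copy_move(img))
```

The `min_shift` guard discards the trivial matches between a block and its overlapping neighbours, keeping only pairs far enough apart to be a plausible copy-move.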
Avoiding weak parameters in secret image sharing
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051617
M. Mohanty, C. Gehrmann, P. Atrey
Abstract: Secret image sharing is a popular image hiding scheme that typically uses (3, 3, n) multi-secret sharing to hide the colors of a secret image. The use of (3, 3, n) multi-secret sharing, however, can lead to information loss. This paper studies this loss from an image perspective and shows that one-third of the color values of the secret image can be leaked when the sum of any two selected share numbers equals the prime number used in the secret sharing. Furthermore, if the selected share numbers do not satisfy this condition (for example, when each selected share number is less than half the prime), the colors of the secret image are not leaked; in that case, only a noise-like image can be reconstructed from fewer than three shares.
Citations: 6
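The weak-parameter condition can be checked directly with a degree-2 Shamir polynomial: when the two share numbers are x and p - x, their share values sum to a quantity in which the random linear coefficient cancels, leaking a relation involving the secret. The (3, 3, n) multi-secret packing of the paper is simplified here to a single secret, so this is a sketch of the algebra, not the paper's scheme:

```python
import random

P = 251                      # prime just above the 8-bit pixel range

def make_shares(secret, xs, rng=random):
    """Degree-2 Shamir sharing: f(x) = secret + a1*x + a2*x^2 (mod P).
    Returns the shares and, for inspection only, the random coefficients."""
    a1, a2 = rng.randrange(P), rng.randrange(P)
    return {x: (secret + a1 * x + a2 * x * x) % P for x in xs}, (a1, a2)

# Weak choice: the two share numbers sum to P, i.e. x2 == -x1 (mod P).
secret, x = 77, 100
shares, (a1, a2) = make_shares(secret, [x, P - x])

# f(x) + f(P - x) = 2*secret + 2*a2*x^2 (mod P): the a1 term cancels, so
# two shares of a supposedly 3-out-of-3 scheme already pin down a relation
# involving the secret. In the paper's (3, 3, n) multi-secret packing, where
# a1 and a2 also carry color values, this is the leakage to avoid.
leak = (shares[x] + shares[P - x]) % P
print(leak == (2 * (secret + a2 * x * x)) % P)
```

Choosing both share numbers below P/2, as the abstract suggests, makes the condition x1 + x2 = P impossible, so the cancellation never occurs.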
Scalar-quantization-based multi-layer data hiding for video coding applications
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051554
Alexey Filippov, Vasily Rufitskiy, V. Potapov
Abstract: This paper presents a novel data-hiding method that does not interfere with other data-hiding techniques (e.g., sign-bit hiding) already included in state-of-the-art coding standards such as HEVC/H.265. A key feature of the proposed technique is its orientation toward hierarchically structured units (e.g., the HEVC/H.265 hierarchy of coding, prediction, and transform units). As shown in the paper, the method provides higher coding gain when applied to scalar-quantized values. Finally, experimental results confirm the high RD performance of the technique compared with explicit signaling, and its suitability for HEVC-compatible watermarking is discussed.
Citations: 6
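The abstract does not detail the embedding rule. Scalar-quantization-based hiding is commonly realized as quantization index modulation (QIM), where the hidden bit selects one of two interleaved quantizer lattices; the sketch below shows that generic idea, not the paper's method:

```python
def qim_embed(value, bit, delta=8):
    """Embed one bit by quantizing onto one of two interleaved lattices:
    multiples of delta for bit 0, shifted by delta/2 for bit 1."""
    return round((value - bit * delta / 2) / delta) * delta + bit * delta / 2

def qim_extract(value, delta=8):
    """Recover the bit by checking which lattice is nearer."""
    d0 = abs(value - round(value / delta) * delta)
    d1 = abs(value - (round((value - delta / 2) / delta) * delta + delta / 2))
    return 0 if d0 <= d1 else 1

# Hiding three bits in three hypothetical quantized coefficients:
coeffs = [13, -5, 22]
bits = [1, 0, 1]
marked = [qim_embed(c, b) for c, b in zip(coeffs, bits)]
print(marked, [qim_extract(m) for m in marked])
```

The embedding distortion is bounded by delta/2 per coefficient, which is the usual capacity-versus-quality knob in schemes of this kind.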
A new convex optimization-based two-pass rate control method for object coding in AVS
2014 IEEE Visual Communications and Image Processing Conference | Pub Date: 2014-12-01 | DOI: 10.1109/VCIP.2014.7051627
X. Yao, S. Chan
Abstract: This paper proposes a new convex-optimization-based two-pass rate control method for object coding in China's Audio Video coding Standard (AVS). The algorithm adopts a two-pass methodology to overcome the interdependency between rate control and rate-distortion optimization. An exponential model describes the rate-distortion behavior of the codec, enabling frame-level and object-level rate control within the two-pass framework, and convex programming solves the resulting optimal bit allocation problem. Region-of-interest (ROI) functionality is also realized at the object level. Experimental results illustrate the good performance and effectiveness of the method.
Citations: 0
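With an exponential R-D model D_i(R_i) = a_i * exp(-b_i * R_i), minimizing total distortion under a rate budget is a convex problem whose solution has a closed form per Lagrange multiplier, found here by bisection. The model parameters in the example are invented for illustration and are not from the paper:

```python
import math

def allocate_bits(a, b, r_total, iters=80):
    """Minimize sum a_i*exp(-b_i*R_i) s.t. sum R_i = r_total, R_i >= 0.
    Setting dD_i/dR_i = -lam gives R_i = max(0, ln(a_i*b_i/lam)/b_i);
    bisect on lam until the rate budget is met."""
    lo, hi = 1e-12, max(ai * bi for ai, bi in zip(a, b))
    for _ in range(iters):
        lam = (lo + hi) / 2
        r = [max(0.0, math.log(ai * bi / lam) / bi) for ai, bi in zip(a, b)]
        if sum(r) > r_total:
            lo = lam          # spending too many bits: raise the price
        else:
            hi = lam
    return r

a, b = [100.0, 40.0, 10.0], [0.5, 0.7, 1.0]   # hypothetical object models
r = allocate_bits(a, b, r_total=12.0)
print([round(x, 2) for x in r], round(sum(r), 4))
```

At the optimum every active object has the same marginal distortion reduction per bit, which is the equal-slope condition that convex programming enforces across frames and objects.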