28th Picture Coding Symposium: Latest Publications

Reducing bitrates of compressed video with enhanced view synthesis for FTV
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702575
Lu Yang, M. O. Wildeboer, T. Yendo, M. P. Tehrani, T. Fujii, M. Tanimoto
Abstract: View synthesis using depth maps is a well-known technique for exploiting the redundancy between multi-view videos. In this paper, we deal with the bitrates of view synthesis at the decoder side of FTV that would use compressed depth maps and views. Both inherent depth estimation error and coding distortion would degrade synthesis quality. The focus is to reduce bitrates required for generating the high-quality virtual view. We employ a reliable view synthesis method which is compared with standard MPEG view synthesis software. The experimental results show that the bitrates required for synthesizing high-quality virtual view could be reduced by utilizing our enhanced view synthesis technique to improve the PSNR at medium bitrates.
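The synthesis step the abstract builds on is depth-image-based rendering (DIBR): pixels of a reference view are warped to the virtual viewpoint using their depth, with a z-buffer resolving occlusions. A minimal 1-D-disparity sketch follows; the function name and the disparity-proportional-to-inverse-depth camera model are illustrative assumptions, not the authors' actual synthesis method.

```python
import numpy as np

def warp_view(texture, depth, baseline_shift):
    """Warp a reference view to a virtual camera using per-pixel depth.

    A minimal horizontal-disparity sketch of DIBR: disparity is taken
    proportional to inverse depth, and a z-buffer keeps the nearest
    surface when several source pixels map to the same target pixel.
    """
    h, w = texture.shape
    virtual = np.zeros_like(texture)
    zbuf = np.full((h, w), np.inf)
    for y in range(h):
        for x in range(w):
            d = baseline_shift / depth[y, x]   # disparity ~ 1/depth
            xv = int(round(x + d))
            if 0 <= xv < w and depth[y, x] < zbuf[y, xv]:
                zbuf[y, xv] = depth[y, x]      # nearest surface wins
                virtual[y, xv] = texture[y, x]
    return virtual
```

A full synthesizer would also fill disocclusion holes and blend warps from two reference views.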
Citations: 6
Improved texture compression for S3TC
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702515
Yifei Jiang, Dandan Huan
Abstract: Texture compression is a specialized form of still image compression employed in computer graphics systems to reduce memory bandwidth consumption. Modern texture compression schemes cannot generate satisfactory qualities for both alpha channel and color channel of texture images. We propose a novel texture compression scheme, named ImTC, based on the insight into the essential difference between transparency and color. ImTC defines new data formats and compresses the two channels flexibly. While keeping the same compression ratio as the de facto standard texture compression scheme, ImTC improves compression qualities of both channels. The average PSNR score of alpha channel is improved by about 0.2 dB, and that of color channel can be increased by 6.50 dB over a set of test images, which makes ImTC a better substitute for the standard scheme.
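The de facto standard scheme ImTC is compared against is S3TC (DXT1): each 4×4 texel block stores two RGB565 endpoint colors plus 2-bit per-texel indices into a 4-entry palette. A sketch of the baseline palette reconstruction follows (helper names are mine; this shows the standard scheme, not ImTC itself):

```python
def rgb565_to_rgb888(c):
    """Expand a packed 16-bit RGB565 color to an (r, g, b) 8-bit tuple."""
    r = (c >> 11) & 0x1F
    g = (c >> 5) & 0x3F
    b = c & 0x1F
    # Scale the 5/6-bit channels to the full 0-255 range.
    return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

def dxt1_palette(c0, c1):
    """Reconstruct the 4-entry color palette of one DXT1 (S3TC) block.

    The ordering of the raw endpoint values selects between the opaque
    and the 1-bit-alpha block modes; each texel then picks one palette
    entry with a 2-bit index.
    """
    p0, p1 = rgb565_to_rgb888(c0), rgb565_to_rgb888(c1)
    if c0 > c1:
        # Opaque mode: two extra colors at 1/3 and 2/3 between endpoints.
        p2 = tuple((2 * a + b) // 3 for a, b in zip(p0, p1))
        p3 = tuple((a + 2 * b) // 3 for a, b in zip(p0, p1))
    else:
        # Transparent mode: one midpoint color plus a transparent entry.
        p2 = tuple((a + b) // 2 for a, b in zip(p0, p1))
        p3 = (0, 0, 0)
    return [p0, p1, p2, p3]
```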
Citations: 6
Medium-granularity computational complexity control for H.264/AVC
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702467
Xiang Li, M. Wien, J. Ohm
Abstract: Today, video applications on handheld devices are becoming more and more popular. Due to the limited computational capability of handheld devices, complexity-constrained video coding has drawn much attention. In this paper, a medium-granularity computational complexity control (MGCC) is proposed for H.264/AVC. First, a large dynamic range in complexity is achieved by taking 16×16 motion estimation in a single reference frame as the basic computational unit. Then high coding efficiency is obtained by adaptive computation allocation at the MB level. Simulations show that coarse-granularity methods cannot work when the normalized complexity is below 15%. In contrast, the proposed MGCC performs well even when the complexity is reduced to 8.8%. Moreover, an average gain of 0.3 dB in BD-PSNR over coarse-granularity methods is obtained for 11 sequences when the complexity is around 20%.
Citations: 5
Bit allocation of vertices and colors for patch-based coding in time-varying meshes
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702449
T. Yamasaki, K. Aizawa
Abstract: This paper discusses bit-rate assignments for vertices, color, reference frames, and target frames in the patch-based compression method for time-varying meshes (TVMs). TVMs are nonisomorphic 3D mesh sequences of real-world objects generated from multiview images. Experimental results demonstrate that the bit rate for vertices greatly affects the visual quality of the rendered 3D model, whereas the bit rate for color does not contribute to quality improvement. Therefore, as many bits as possible should be assigned to vertices, with 8-10 bits per vertex (bpv) per frame being sufficient for color. For interframe coding, the visual quality is improved in proportion to the bit rate of both vertices and color. However, it is demonstrated that the use of fewer bits (5-6 bpv) is sufficient to achieve a visual quality that matches the intraframe visual quality.
Citations: 3
On the duality of rate allocation and quality indices
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702484
T. Richter
Abstract: In a recent work [16], the author proposed to study the performance of still image quality indices such as the SSIM by using them as the objective function of rate allocation algorithms. The outcome of that work was not only a multi-scale SSIM optimal JPEG 2000 implementation, but also a first-order approximation of the MS-SSIM that is surprisingly similar to more traditional contrast-sensitivity and visual masking based approaches. It will be seen in this work that the only difference between the latter works and the MS-SSIM index is the choice of the exponent of the masking term, and furthermore, that a slight modification of the SSIM definition reproducing the traditional exponent is able to improve the performance of the index at or below the visual threshold. It is hence demonstrated that the duality of quality indices and rate allocation helps to improve both the visual performance of the compression codec and the performance of the index.
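For context, the single-scale SSIM index underlying the MS-SSIM discussed above is a product of mean, variance and covariance ratios. A minimal single-window sketch with the conventional constants follows; it is the generic index, not the author's modified-exponent variant:

```python
import numpy as np

def ssim_index(x, y, L=255.0, k1=0.01, k2=0.03):
    """Single-window SSIM between two equally sized image patches.

    Combines a luminance term (means) with a contrast/structure term
    (variances and covariance); c1 and c2 stabilise the ratios.
    """
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

A rate allocator can use such an index (rather than MSE) as the distortion term when deciding how many bits each code-block receives.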
Citations: 0
A real-time system of distributed video coding
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702557
K. Sakomizu, T. Yamasaki, Satoshi Nakagawa, T. Nishi
Abstract: This paper presents a real-time system of distributed video coding (DVC). DVC is a recent video compression paradigm whose decoding process is normally complex, which makes real-time implementation difficult. To address this problem, we propose a new DVC configuration with three methods: simple rate control without the feedback channel, simple transmission of the dynamic range, and simple bidirectional motion estimation to reduce complexity. We then implement the system with parallelization techniques and develop the encoder for a low-power processor. Experimental results show that the encoder on a 400 MHz i.MX31 operates at about 13 fps for CIF, and the decoder on a 2.83 GHz Core 2 Quad operates at more than 30 fps for CIF.
Citations: 6
An improved Wyner-Ziv video coding with feedback channel
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702468
Feng Ye, Aidong Men, Bo Yang, Manman Fan, Kan Chang
Abstract: This paper presents an improved feedback-assisted, low-complexity Wyner-Ziv video coding (WZVC) scheme. Its performance is improved by two enhancements: an improved mode-based key frame encoding and a 3DRS-assisted (three-dimensional recursive search assisted) motion estimation algorithm for WZ encoding. Experimental results show that our coding scheme achieves a significant gain over the state-of-the-art TDWZ codec while keeping encoding complexity low.
Citations: 1
Dictionary learning-based distributed compressive video sensing
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702466
Hung-Wei Chen, Li-Wei Kang, Chun-Shien Lu
Abstract: We address an important issue of fully low-cost and low-complexity video compression for use in resource-extremely-limited sensors/devices. Conventional motion estimation-based video compression or distributed video coding (DVC) techniques all rely on a high-cost mechanism, namely, sensing/sampling and compression are performed disjointly, resulting in unnecessary consumption of resources. That is, most acquired raw video data will be discarded in the (possibly) complex compression stage. In this paper, we propose a dictionary learning-based distributed compressive video sensing (DCVS) framework to "directly" acquire compressed video data. Embedded in the compressive sensing (CS)-based single-pixel camera architecture, DCVS can compressively sense each video frame in a distributed manner. At the DCVS decoder, video reconstruction can be formulated as an l1-minimization problem, solving for the sparse coefficients with respect to some basis functions. We investigate adaptive dictionary/basis learning for each frame based on training samples extracted from previously reconstructed neighboring frames, and argue that a much better basis can be obtained to represent the frame, compared to fixed-basis representations and recent popular "CS-based DVC" approaches that do not rely on dictionary learning.
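The l1-minimization at the DCVS decoder can be solved by many algorithms; the paper does not specify one, so purely as an illustration, here is iterative soft-thresholding (ISTA), one of the simplest solvers for this class of problem:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Solve min_s 0.5*||y - A s||^2 + lam*||s||_1 by iterative
    soft-thresholding (ISTA): a gradient step on the quadratic term,
    then the shrinkage operator induced by the l1 penalty.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = squared spectral norm
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = s + step * (A.T @ (y - A @ s))                        # gradient step
        s = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrinkage
    return s
```

In the DCVS setting, A would be the product of the sensing matrix and the learned dictionary, and the frame would be reconstructed from the recovered sparse coefficients.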
Citations: 49
3-D video coding using depth transition data
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702453
Woo-Shik Kim, Antonio Ortega, Jaejoon Lee, H. Wey
Abstract: The objective of this work is to develop a new 3-D video coding system which can provide better coding efficiency with improved subjective quality as compared to existing 3-D video systems. We have analyzed the distortions that occur in rendered views generated using depth image based rendering (DIBR) and classified them in order to evaluate their impact on subjective quality. As a result, we found that depth map coding distortion leads to "erosion artifacts" at object boundaries, which lead to significant degradation in perceptual quality. To solve this problem, we propose a solution in which depth transition data is encoded and transmitted to the decoder. Depth transition data for a given pixel indicates the camera position for which this pixel's depth will change. A main reason to consider transmitting this information explicitly is that it can be used to improve view interpolation at many different intermediate camera positions. Simulation results show that the subjective quality can be significantly improved by reducing the effect of erosion artifacts, using our proposed depth transition data. Maximum PSNR gains of about 0.5 dB can also be observed.
Citations: 12
Inter prediction based on spatio-temporal adaptive localized learning model
28th Picture Coding Symposium Pub Date : 2010-12-01 DOI: 10.1109/PCS.2010.5702459
Hao Chen, R. Hu, Zhongyuan Wang, Rui Zhong
Abstract: Inter prediction based on block matching motion estimation is important for video coding. But this method suffers from the additional data-rate overhead of the motion information that must be transmitted to the decoder. To solve this problem, we present an improved implicit-motion-information inter prediction algorithm for P slices in H.264/AVC based on the spatio-temporal adaptive localized learning (STALL) model. According to the 4×4 block transform structure in H.264/AVC, we first adaptively choose nine spatial neighbors and nine temporal neighbors, and a localized 3D causal cube is designed as the training window. Using this information, the model parameters can be adaptively computed with the Least Square Prediction (LSP) method. Finally, we add a new inter prediction mode to the H.264/AVC standard for P slices. The experimental results show that our algorithm improves encoding efficiency compared with the H.264/AVC standard, with a relative increase in complexity.
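The LSP step the abstract relies on fits prediction weights by least squares over the causal training window and applies them to the current sample's neighbors. A minimal sketch follows (array shapes and the function name are assumptions for illustration):

```python
import numpy as np

def lsp_predict(train_X, train_y, neighbors):
    """One Least-Square Prediction (LSP) step.

    train_X: rows are the neighbor vectors of already-coded samples in
    the causal training window; train_y: their actual values.  Fits
    weights a minimising ||train_y - train_X a||^2, then predicts the
    current sample as the weighted sum of its own neighbors.
    """
    a, *_ = np.linalg.lstsq(train_X, train_y, rcond=None)
    return float(neighbors @ a)
```

Because the decoder has the same causal window, it can recompute the weights itself, which is what makes the motion information implicit.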
Citations: 0