2010 IEEE International Workshop on Multimedia Signal Processing — Latest Publications

Depth-aided image inpainting for novel view synthesis
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662013
Ismaël Daribo, B. Pesquet-Popescu
Abstract: Depth Image Based Rendering (DIBR) has been recognized as a promising tool for supporting the advanced 3D video services required in MultiView Video (MVV) systems. However, an inherent problem with DIBR is filling the newly exposed areas (holes) caused by disocclusions. This paper addresses the disocclusion problem. To deal with small disocclusions, state-of-the-art hole-filling strategies rely on pre-processing of the depth video. For larger disocclusions, where depth pre-processing has limitations, we propose an inpainting approach to retrieve the missing pixels. Specifically, we propose taking the depth information into account in the texture and structure propagation process by distinguishing the foreground and background parts of the scene. Experimental results illustrate the efficiency of the proposed method.
Citations: 151
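The foreground/background distinction described in the abstract can be illustrated with a minimal depth-aided hole-filling sketch — not the authors' inpainting algorithm, only the underlying intuition that disocclusions reveal background, so holes should be filled from the background side. The function name and the larger-depth-is-farther convention are assumptions for this sketch.

```python
import numpy as np

def fill_holes_depth_aided(image, depth, hole_mask):
    """Fill disocclusion holes row by row, copying from the side of the
    hole whose depth marks it as background (disocclusions are revealed
    background, never foreground). Assumes larger depth = farther away."""
    out = image.astype(float).copy()
    h, w = hole_mask.shape
    for y in range(h):
        x = 0
        while x < w:
            if not hole_mask[y, x]:
                x += 1
                continue
            x0 = x                          # start of a hole run
            while x < w and hole_mask[y, x]:
                x += 1
            left, right = x0 - 1, x         # nearest valid neighbours
            if left < 0:
                src = right
            elif right >= w:
                src = left
            else:
                # prefer the deeper (background) side as the fill source
                src = left if depth[y, left] >= depth[y, right] else right
            out[y, x0:x] = out[y, src]
    return out

filled = fill_holes_depth_aided(
    image=np.array([[1., 1., 0., 0., 5.]]),
    depth=np.array([[9., 9., 0., 0., 1.]]),   # left side is farther -> background
    hole_mask=np.array([[False, False, True, True, False]]),
)
# hole columns 2-3 receive the background (left) value 1.0
```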
Multistage compressed-sensing reconstruction of multiview images
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662003
M. Trocan, Thomas Maugey, Eric W. Tramel, J. Fowler, B. Pesquet-Popescu
Abstract: Compressed sensing is applied to multiview image sets, and the high degree of correlation between views is exploited to enhance recovery performance over straightforward independent view recovery. This gain in performance is obtained by recovering the difference between a set of acquired measurements and the projection of a prediction of the signal they represent. The recovered difference is then added back to the prediction, and the prediction and recovery procedure is repeated in an iterated fashion for each of the views in the multiview image set. The recovered multiview image set is then used as an initialization to repeat the entire process again, forming a multistage refinement. Experimental results reveal substantial performance gains from the multistage reconstruction.
Citations: 19
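The recover-the-difference step in the abstract can be sketched in a few lines. The sparse-recovery solver is replaced here by a plain pseudo-inverse (minimum-energy) solve — a stand-in, not the paper's reconstruction — so only the prediction/residual structure is illustrated; all names are ours.

```python
import numpy as np

def residual_recover(y, Phi, prediction, recover):
    """One prediction/recovery step: recover the *difference* between the
    measurements and the projected prediction, then add it back."""
    residual_meas = y - Phi @ prediction
    d_hat = recover(residual_meas, Phi)
    return prediction + d_hat

# Stand-in recovery: pseudo-inverse solution. A real CS reconstruction
# would use a sparsity-promoting solver here instead.
least_squares = lambda r, Phi: np.linalg.pinv(Phi) @ r

rng = np.random.default_rng(0)
x = rng.standard_normal(16)           # unknown signal (one view's samples)
Phi = rng.standard_normal((12, 16))   # 12 random measurements of 16 samples
y = Phi @ x
pred = x + 0.3 * rng.standard_normal(16)   # noisy prediction from other views
x_hat = residual_recover(y, Phi, pred, least_squares)
# x_hat is measurement-consistent and closer to x than the prediction was
```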
Robust foreground segmentation for GPU architecture in an immersive 3D videoconferencing system
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5661997
J. Civit, Ò. Escoda
Abstract: Current telepresence systems, while a great step forward in videoconferencing, still leave much to improve with respect to eye contact, gaze and gesture awareness. Many-to-many communications will greatly benefit from mature auto-stereoscopic 3D technology, allowing people to engage in more natural remote meetings with proper eye contact and a better feeling of spatiality. For this purpose, proper real-time multi-perspective 3D video capture is necessary (often based on one or more View+Depth data sets). Given the current state of the art, some form of foreground segmentation is often necessary at acquisition in order to generate 3D depth maps with high enough resolution and accurate object boundaries. This requires flicker-less foreground segmentations that are accurate at borders, resilient to noise and foreground shade changes, and able to operate in real time on performing architectures such as GPGPUs. This paper introduces a robust foreground segmentation approach used within the experimental immersive 3D telepresence system of the EU-FP7 3DPresence project. The proposed algorithm is based on cost minimization using Hierarchical Belief Propagation and on outlier reduction by regularization over over-segmented regions. The iterative nature of the approach makes it scalable in complexity, allowing it to increase accuracy and picture-size capacity as GPGPUs become faster. In this work, particular care has also been taken in the design of the foreground and background cost models in order to overcome limitations of previous work in the literature.
Citations: 5
Depth consistency testing for improved view interpolation
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662051
P. Rana, M. Flierl
Abstract: Multiview video will play a pivotal role in next-generation visual communication media services such as three-dimensional (3D) television and free-viewpoint television. These advanced media services provide natural 3D impressions and enable viewers to move freely in a dynamic real-world scene by changing the viewpoint. High-quality virtual view interpolation is required to support free-viewpoint viewing. Usually, depth maps of different viewpoints are used to reconstruct a novel view. As these depth maps are usually estimated individually by stereo-matching algorithms, they have very weak spatial consistency, and this inconsistency affects the quality of view interpolation. In this paper, we propose a method for depth consistency testing to improve view interpolation. The method addresses the problem by warping more than two depth maps from multiple reference viewpoints to the virtual viewpoint. We test the consistency among the warped depth values and improve the depth information of the virtual view, thereby enhancing the quality of the interpolated virtual view.
Citations: 16
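A minimal sketch of the consistency test among warped depth values might look as follows, assuming the per-view depth maps have already been warped to the virtual viewpoint. The median-based test and the averaging fusion are our illustrative choices, not necessarily the paper's exact rule.

```python
import numpy as np

def fuse_warped_depths(warped, thresh):
    """warped: (K, H, W) depth maps already warped to the virtual view.
    A pixel's K depth values are deemed consistent when they lie within
    `thresh` of their median; inconsistent values are discarded before
    fusing, which suppresses outliers from weakly consistent estimates."""
    med = np.median(warped, axis=0)                  # (H, W) robust reference
    consistent = np.abs(warped - med) <= thresh      # (K, H, W) mask
    weights = consistent.astype(float)
    # Fuse: average the consistent values only (guard against empty sets).
    fused = (warped * weights).sum(axis=0) / np.maximum(weights.sum(axis=0), 1e-9)
    return fused, consistent

# Three views vote on a single pixel; the 50.0 outlier is rejected.
warped = np.array([[[10.0]], [[10.2]], [[50.0]]])
fused, ok = fuse_warped_depths(warped, thresh=1.0)
```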
Optimizing the free distance of Error-Correcting Variable-Length Codes
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662027
A. Diallo, C. Weidmann, M. Kieffer
Abstract: This paper considers the optimization of Error-Correcting Variable-Length Codes (EC-VLC), a class of joint source-channel codes. The aim is to find a prefix-free codebook with the largest possible free distance for a given set of codeword lengths ℓ = (ℓ1, ℓ2, …, ℓM). The proposed approach consists in ordering all possible codebooks associated with ℓ on a tree, and then applying an efficient branch-and-prune algorithm to find a codebook with maximal free distance. Three methods for building the tree of codebooks are presented and their efficiency is compared.
Citations: 4
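A toy version of the codebook search can convey the idea. Two heavy simplifications are ours, not the paper's: the true free distance (defined over equal-length *sequences* of codewords) is replaced by the block distance — the minimum Hamming distance among equal-length codewords, which upper-bounds the free distance — and the branch-and-prune is a plain exhaustive scan, feasible only for tiny length sets.

```python
from itertools import product

def prefix_free(codebook):
    """True if no codeword is a prefix of another."""
    return not any(a != b and b.startswith(a) for a in codebook for b in codebook)

def block_distance(codebook):
    """Minimum Hamming distance between equal-length codewords — an upper
    bound on the free distance, used here as a toy stand-in for it."""
    best = float("inf")
    for i, a in enumerate(codebook):
        for b in codebook[i + 1:]:
            if len(a) == len(b):
                best = min(best, sum(x != y for x, y in zip(a, b)))
    return best

def best_codebook(lengths):
    """Exhaustive search over all binary codebooks with the given codeword
    lengths; keep a prefix-free one maximizing the (toy) distance score."""
    best_score, best_cb = -1, None
    candidates = [["".join(bits) for bits in product("01", repeat=L)]
                  for L in lengths]
    for cb in product(*candidates):
        if len(set(cb)) == len(cb) and prefix_free(cb):
            s = block_distance(cb)
            if s > best_score:
                best_score, best_cb = s, cb
    return best_cb, best_score

cb, score = best_codebook([2, 2])   # e.g. ("00", "11") with distance 2
```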
Scalable-to-lossless transform domain distributed video coding
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662041
Xin Huang, Anna Ukhanova, A. Veselov, Søren Forchhammer, M. Gilmutdinov
Abstract: Distributed video coding (DVC) is a novel approach offering new features such as low-complexity encoding, achieved mainly by exploiting the source statistics at the decoder, based on the availability of decoder side information. In this paper, scalable-to-lossless DVC is presented, based on extending a lossy Transform Domain Wyner-Ziv (TDWZ) distributed video codec with feedback. Lossless coding is obtained by using a reversible integer DCT. Experimental results show that the proposed scalable-to-lossless TDWZ video codec can outperform alternatives based on the JPEG 2000 standard. The TDWZ codec provides frame-by-frame encoding. Comparing lossless coding efficiency, the proposed codec saves up to 5%–13% in bits compared to JPEG-LS and H.264 Intra-frame lossless coding, while remaining scalable to lossless.
Citations: 0
Error concealment considering error propagation inside a frame
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662053
Jun Wang, Yichun Tang, Hao Sun, S. Goto
Abstract: Transmission of compressed video over error-prone channels may result in packet losses or errors, which can significantly degrade image quality. Such degradation is even worse in 1Seg video broadcasting — recently in wide use in Japan and Brazil for mobile-phone TV services — where errors increase drastically and lost areas are contiguous. Errors in earlier-concealed macroblocks (MBs) may therefore propagate to MBs concealed later inside the same frame (the spatial domain). Error concealment (EC) recovers the lost data by exploiting redundancy in the video. Aiming at spatial error propagation (SEP) reduction, this paper proposes an SEP-reduction-based EC (SEPEC). In SEPEC, besides the mismatch distortion in the current MB, the potential mismatch distortion propagated to subsequently concealed MBs is also minimized. Two extensions of SEPEC — with refined search and with multiple-layer match — are also discussed. Compared with previous work, experiments show that SEPEC achieves much better video-recovery performance and an excellent trade-off between quality and computation cost in 1Seg broadcasting.
Citations: 1
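The boundary-matching criterion underlying such concealment can be sketched. This shows only the classic per-block rule — pick the candidate motion vector whose copied block best matches the available neighbours — with SEPEC's propagation-aware extra term left as a comment; function and argument names are ours.

```python
import numpy as np

def conceal_block(ref, cur, y, x, B, candidates):
    """Boundary-matching concealment of the BxB block at (y, x) in `cur`:
    try each candidate motion vector into reference frame `ref` and keep
    the one whose copied block best matches the block's available top and
    left neighbours in `cur`. (SEPEC additionally scores the mismatch this
    choice would propagate to still-unconcealed neighbouring MBs.)"""
    best_cost, best_mv = float("inf"), None
    for dy, dx in candidates:
        ry, rx = y + dy, x + dx
        if not (0 <= ry and ry + B <= ref.shape[0]
                and 0 <= rx and rx + B <= ref.shape[1]):
            continue
        patch = ref[ry:ry + B, rx:rx + B]
        cost = 0.0
        if y > 0:   # top boundary mismatch
            cost += np.abs(patch[0] - cur[y - 1, x:x + B]).sum()
        if x > 0:   # left boundary mismatch
            cost += np.abs(patch[:, 0] - cur[y:y + B, x - 1]).sum()
        if cost < best_cost:
            best_cost, best_mv = cost, (dy, dx)
    dy, dx = best_mv
    cur[y:y + B, x:x + B] = ref[y + dy:y + dy + B, x + dx:x + dx + B]
    return best_mv

ref = np.tile(np.arange(8.0), (8, 1))   # simple horizontal ramp frame
cur = ref.copy()
cur[2:4, 2:4] = 0.0                     # a lost 2x2 block
mv = conceal_block(ref, cur, y=2, x=2, B=2, candidates=[(0, 0), (0, 2)])
# the zero-motion candidate matches the boundaries and restores the ramp
```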
Optimized decomposition basis using Lanczos filters for lossless compression of biomedical images
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662005
Jonathan Taquet, C. Labit
Abstract: This paper introduces Lanczos interpolation filters as wavelet atoms in an optimized decomposition for embedded lossy-to-lossless compression of biomedical images. The decomposition and the Lanczos parameter are jointly optimized in a generic packet structure in order to take into account the varied content of biomedical imaging modalities. Lossless experimental results are given on a large-scale database. They show that, in comparison with a well-known basis using 5/3 biorthogonal wavelets and a dyadic decomposition, the proposed approach improves compression by more than 10% on less noisy images and by up to 30% on 3D-MRI, while providing similar results on noisy datasets.
Citations: 10
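For reference, the Lanczos-a kernel itself — the interpolation filter the paper promotes to a wavelet atom — is easy to write down; the decomposition optimization built around it is not reproduced here.

```python
import numpy as np

def lanczos_kernel(x, a):
    """Lanczos-a kernel: sinc(x) * sinc(x/a) on [-a, a], zero outside.
    np.sinc is the normalized sinc, sin(pi*x)/(pi*x), as Lanczos requires."""
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, out, 0.0)

# Sample the a=3 kernel: 1 at the origin, zero at nonzero integers
# inside the support, and zero outside [-3, 3].
k = lanczos_kernel(np.array([0.0, 0.5, 1.0, 3.0]), a=3)
```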
Motion vector forecast and mapping (MV-FMap) method for entropy coding based video coders
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5662020
J. L. Tanou, Jean-Marc Thiesse, Joël Jung, M. Antonini
Abstract: Since the finalization of the H.264/AVC standard, and in order to meet the target set by both ITU-T and MPEG to define a new standard reaching 50% bit-rate reduction compared to H.264/AVC, many tools have efficiently improved texture coding and motion-compensation accuracy. These improvements have increased the proportion of bit rate allocated to motion information, so reducing the bit rate of this information has become a key research subject. This paper proposes a method for motion vector coding based on an adaptive redistribution of motion vector residuals before entropy coding. Motion information is gathered to forecast a list of motion vector residuals, which are redistributed onto unexpected residuals of lower coding cost. Compared to H.264/AVC, this scheme provides systematic gains on the tested sequences — 2.3% on average, reaching up to 4.9% for one sequence.
Citations: 0
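The redistribution step can be sketched as a simple bijection between residual values and code symbols. Everything here — the names, and the assumption that `alphabet` lists residual values ordered by ascending entropy-code cost — is illustrative, not the codec's actual tables.

```python
def remap_residuals(forecast, alphabet):
    """Build an MV-FMap-style bijection: residuals that the forecast ranks
    as likely are swapped onto the cheapest code symbols (the front of
    `alphabet`); the remaining residuals keep the remaining symbols in
    order. `forecast` entries must come from `alphabet`."""
    mapping, used = {}, set()
    # Likely residuals take the cheapest symbols, in forecast order.
    for r, s in zip(forecast, alphabet):
        mapping[r] = s
        used.add(s)
    # Remaining residuals receive the remaining symbols in cost order.
    rest = iter(s for s in alphabet if s not in used)
    for r in alphabet:
        if r not in mapping:
            mapping[r] = next(rest)
    return mapping

alphabet = [0, 1, -1, 2, -2]   # residual values, cheapest code first (assumed)
forecast = [2, -1]             # residuals the forecast deems most likely
table = remap_residuals(forecast, alphabet)
# 2 now gets the cheapest symbol, -1 the second cheapest
```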
A comparative study between different pre-whitening decorrelation based acoustic feedback cancellers
Pub Date: 2010-12-10 · DOI: 10.1109/MMSP.2010.5661984
K. Essafi, S. B. Jebara
Abstract: The use of an adaptive feedback canceller (AFC) based on signal pre-whitening/filtering in hearing aids is attractive, since it limits desired-signal degradation when the amplification gain is increased. In this paper, we present a comparative assessment of the performance of several existing pre-whitening-decorrelation-based methods. The criteria used consider adaptive-filter performance, system stability, and speech quality in terms of distortion and oscillation. Results show that the method combining loudspeaker pre-whitening and microphone filtering performs best. Moreover, using an adequate adaptive pre-whitener algorithm — based on minimizing a criterion that considers the inter-correlation between the pre-whitened loudspeaker and filtered microphone signals — further improves performance.
Citations: 1