2013 IEEE International Conference on Image Processing: Latest Papers

Reconstruction of depth and normals from interreflections
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738527
Binh-Son Hua, T. Ng, Kok-Lim Low
Abstract: While geometry reconstruction has been extensively studied, several shortcomings remain. First, traditional geometry reconstruction methods such as geometric or photometric stereo recover only surface depth or only normals. Second, such methods require calibration. Third, they cannot recover accurate geometry in the presence of interreflections. To address these problems in a single system, we propose an approach that reconstructs geometry from light transport data. Specifically, we investigate geometry reconstruction from the interreflections in a light transport matrix. We show that by solving a system of polynomial equations derived directly from the interreflection matrix, both surface depth and normals can be fully reconstructed. Our system does not require projector-camera calibration; it only uses a calibration object such as a checkerboard in the scene to pre-determine a few known points that simplify the polynomial solver. Our experimental results show that the system reconstructs accurate geometry from interreflections up to a certain noise level, and it is easy to set up in practice.
Citations: 0
Graph-based rotation of the DCT basis for motion-adaptive transforms
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738371
Du Liu, M. Flierl
Abstract: In this paper, we consider motion-adaptive transforms based on vertex-weighted graphs. The graphs are constructed from motion vector information, and the vertex weights are given by scale factors that control the energy compaction of the transform. The vertex-weighted graph defines a one-dimensional linear subspace, so our transform basis is subspace-constrained. To find a full transform matrix satisfying this constraint, we rotate the discrete cosine transform (DCT) basis such that the first basis vector matches the subspace constraint. Since rotation is not unique in high dimensions, we choose a simple rotation that rotates the DCT basis only in the plane spanned by the first DCT basis vector and the subspace constraint. Experimental results on energy compaction show that the motion-adaptive transform based on this rotation outperforms the motion-compensated orthogonal transform based on hierarchical decomposition while sharing the same first basis vector.
Citations: 4
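The core construction in the abstract above, rotating an orthonormal basis so that its first vector matches a constraint vector while rotating only in the plane spanned by the two, can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the constraint vector `t` here is an arbitrary unit vector standing in for the subspace derived from the motion graph.

```python
import numpy as np

def rotate_basis_to(a, b):
    """Orthogonal R with R @ a = b that rotates only in span{a, b}
    and acts as the identity on the orthogonal complement."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    c = float(a @ b)                      # cosine of the angle between a and b
    u = b - c * a                         # component of b orthogonal to a
    s = np.linalg.norm(u)                 # sine of the angle
    if s < 1e-12:                         # already aligned: nothing to rotate
        return np.eye(len(a))
    u = u / s
    return (np.eye(len(a))
            + (c - 1.0) * (np.outer(a, a) + np.outer(u, u))
            + s * (np.outer(u, a) - np.outer(a, u)))

# Orthonormal DCT-II basis (columns are basis vectors), N = 8.
N = 8
n = np.arange(N)
D = np.sqrt(2.0 / N) * np.cos(np.pi * (n[:, None] + 0.5) * n[None, :] / N)
D[:, 0] = 1.0 / np.sqrt(N)               # DC column gets the 1/sqrt(N) scaling

# Hypothetical subspace constraint (in the paper it comes from the
# vertex-weighted motion graph); here just some non-constant unit vector.
t = np.arange(1.0, N + 1)
t = t / np.linalg.norm(t)

R = rotate_basis_to(D[:, 0], t)
B = R @ D                                # rotated basis: first vector equals t
```

Because `R` is orthogonal and touches only the 2-D plane spanned by `D[:, 0]` and `t`, the rotated matrix `B` stays orthonormal while meeting the subspace constraint.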
Synthetic training in object detection
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738641
Osama Khalil, Mohammed E. Fathy, D. Kholy, M. El-Saban, Pushmeet Kohli, J. Shotton, Yasmine Badr
Abstract: We introduce new approaches for augmenting the annotated training datasets used in object detection tasks, serving two goals: reducing the effort needed to collect and manually annotate huge datasets, and introducing novel variations into the initial dataset that help the learning algorithms. The methods presented in this work relocate objects, using their segmentation masks, onto new backgrounds. These variations comprise changes in object properties such as spatial location in the image, surrounding context, and scale. We also propose a model selection approach to arbitrate between the constructed models on a per-class basis. Experimental results show the gains that can be harvested with the proposed approach.
Citations: 5
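The relocation step described above, pasting an object into a new background via its segmentation mask, reduces to masked compositing. A minimal sketch (the function name and toy arrays are mine, not from the paper):

```python
import numpy as np

def composite(obj, mask, background, top, left):
    """Paste the masked pixels of `obj` onto a copy of `background`
    at offset (top, left), leaving the rest of the background intact."""
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]   # view into the copy
    m = mask.astype(bool)
    region[m] = obj[m]                         # only mask==1 pixels transfer
    return out

# Toy example: a 2x2 "object" with an L-shaped mask relocated onto a 6x6 scene.
bg = np.zeros((6, 6), dtype=int)
obj = np.full((2, 2), 9)
mask = np.array([[1, 0],
                 [1, 1]])
out = composite(obj, mask, bg, 2, 3)
```

Varying `top`/`left` gives the spatial-location augmentation; scaling `obj` and `mask` together before pasting would give the scale variation the abstract mentions.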
An optimally complexity scalable multi-mode decision algorithm for HEVC
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738412
Yihao Zhang, Shichao Huang, Huang Li, Hongyang Chao
Abstract: The quad-tree-based coding unit (CU) structure in HEVC provides more motion compensation sizes to improve rate-distortion performance, at the cost of greatly increased computational complexity. Unlike other research on fast algorithms, we develop an optimally complexity scalable multi-mode decision algorithm (OCSMD) for HEVC. This paper makes two major contributions. The first is a novel feature that describes the relationship between the motion vector field and CU depth. The second is a frame-level cost-performance priority prediction model built on this feature, with negligible overhead and no conflict with the standard. Our method can allocate computational resources to the mode decision of all CUs at the frame level under arbitrary complexity constraints while obtaining nearly optimal coding performance. Experimental results show that our algorithm adjusts complexity under varying computing capacity while achieving near-optimal rate-distortion performance.
Citations: 12
Restricted Boltzmann machine approach to couple dictionary training for image super-resolution
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738103
Junbin Gao, Yi Guo, Ming Yin
Abstract: Image super-resolution forms high-resolution images from low-resolution images. In this paper, we develop a new approach to image super-resolution based on deep Restricted Boltzmann Machines (RBM). The RBM architecture can learn a set of visual patterns, called dictionary elements, from a set of training images. The learned dictionary is then used to synthesize high-resolution images. We test the proposed algorithm on both benchmark and natural images, comparing it with several other techniques. The visual quality of the results is assessed by both human evaluation and quantitative measurement.
Citations: 13
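For readers unfamiliar with the building block named above: a single RBM is trained by contrastive divergence, and its weight columns play the role of learned dictionary elements. The sketch below is a generic binary RBM with one-step contrastive divergence (CD-1) on toy two-prototype "patches"; it is not the paper's deep, coupled architecture, and all sizes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_vis, n_hid, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.b_v = np.zeros(n_vis)                  # visible bias
        self.b_h = np.zeros(n_hid)                  # hidden bias
        self.lr = lr

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a batch of binary data."""
        p_h0 = sigmoid(v0 @ self.W + self.b_h)      # positive phase
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)    # one Gibbs step back
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return float(np.mean((v0 - p_v1) ** 2))     # reconstruction error

# Toy "patches": samples of two 16-pixel prototype patterns.
protos = np.array([[1, 1, 1, 1, 0, 0, 0, 0] * 2,
                   [0, 0, 0, 0, 1, 1, 1, 1] * 2], dtype=float)
data = protos[rng.integers(0, 2, 60)]

rbm = RBM(n_vis=16, n_hid=8)
errs = [rbm.cd1_step(data) for _ in range(200)]
```

After training, the columns of `rbm.W` are the learned patterns; the abstract's dictionary elements are the analogous quantities in the authors' deeper model.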
Spatio-temporal error concealment in video by denoised temporal extrapolation refinement
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738332
Jürgen Seiler, M. Schöberl, André Kaup
Abstract: In video communication, concealing distortions caused by transmission errors is important for maintaining pleasant visual quality and for reducing error propagation. In this article, Denoised Temporal Extrapolation Refinement is introduced as a novel spatio-temporal error concealment algorithm. The algorithm operates in two steps. First, temporal error concealment provides an initial estimate. Afterwards, a spatial denoising algorithm reduces the imperfections of the temporal extrapolation. For this, Non-Local Means denoising is used, extended by a spiral scan processing order and improved by an adaptation step that takes the preliminary temporal extrapolation into account. The result is a spatio-temporal error concealment algorithm. The refinement yields a visually noticeable average gain of 1 dB over pure temporal error concealment, and the algorithm clearly outperforms other spatio-temporal error concealment algorithms.
Citations: 6
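The spiral scan processing order mentioned above can be generated with a simple ring-peeling routine. This is one plausible interpretation (outer border first, spiraling inward); the paper's exact traversal may differ.

```python
def spiral_order(h, w):
    """Visit an h-by-w block ring by ring, outer border first, spiraling
    inward, so inner pixels are processed after their outer neighbours."""
    top, bottom, left, right = 0, h - 1, 0, w - 1
    order = []
    while top <= bottom and left <= right:
        for c in range(left, right + 1):              # top edge, left to right
            order.append((top, c))
        for r in range(top + 1, bottom + 1):          # right edge, downwards
            order.append((r, right))
        if top < bottom:
            for c in range(right - 1, left - 1, -1):  # bottom edge, right to left
                order.append((bottom, c))
        if left < right:
            for r in range(bottom - 1, top, -1):      # left edge, upwards
                order.append((r, left))
        top, bottom, left, right = top + 1, bottom - 1, left + 1, right - 1
    return order
```

For a lost block, processing pixels in this order means each denoised pixel can draw on neighbours that were refined just before it.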
An enhanced approach for simultaneous image reconstruction and sensitivity map estimation in partially parallel imaging
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738477
Meng Liu, Yunmei Chen, Yuyuan Ouyang, X. Ye, F. Huang
Abstract: We develop a variational model and a fast, robust numerical algorithm for simultaneous sensitivity map estimation and image reconstruction in partially parallel MR imaging with significantly under-sampled data. The proposed model uses a maximum likelihood approach to minimize the data-fitting residual in the presence of independent Gaussian noise. Maximum likelihood estimation dramatically reduces sensitivity to the selection of the model parameter, and increases the accuracy and robustness of the algorithm. Moreover, variable splitting based on the specific structure of the objective function, together with the alternating direction method of multipliers (ADMM), accelerates the computation. Preliminary results indicate that the proposed method yields fast and robust reconstruction.
Citations: 1
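The variable splitting plus ADMM machinery named above can be illustrated on a much simpler problem than the paper's MR objective. The sketch below applies ADMM to a small lasso problem (min 0.5||Ax-b||^2 + lam||z||_1 subject to x = z), purely to show the split-then-alternate pattern; the paper's actual splitting and subproblems are different.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.01, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    AtA = A.T @ A + rho * np.eye(n)       # formed once, reused each iteration
    Atb = A.T @ b
    for _ in range(iters):
        x = np.linalg.solve(AtA, Atb + rho * (z - u))   # smooth subproblem
        z = soft(x + u, lam / rho)                      # l1 subproblem
        u = u + x - z                                   # scaled dual update
    return z

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[1, 4]] = [2.0, -3.0]
b = A @ x_true                            # noiseless sparse regression data
x_hat = admm_lasso(A, b)
```

Each subproblem is cheap because the splitting isolates the smooth term from the non-smooth one, which is the same structural trick the paper exploits.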
Fast mosaicing of cystoscopic images from dense correspondence: Combined SURF and TV-L1 optical flow method
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738266
Sharib Ali, C. Daul, Thomas Weibel, W. Blondel
Abstract: In white light cystoscopy, bladder images are characterized by strong texture and scene illumination variability, which complicates image mosaicing. State-of-the-art methods achieve high image registration accuracy at the expense of computational time. We propose an algorithm that selects either a feature-based method or an optical flow method according to the image texture, yielding fast and accurate bladder wall mosaicing. The total variation (TV) optical flow method (derived by duality) guarantees robust registration of poorly textured images. Realistic phantom images are registered with subpixel accuracy, with processing speed-ups by factors of 8 and 16 over two reference methods. Results on patient data also illustrate the performance of the algorithm.
Citations: 17
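The per-frame switch between a feature-based method and dense optical flow hinges on some texture measure. The paper does not spell out its criterion here, so the sketch below uses an assumed, hypothetical measure (mean squared gradient magnitude with an arbitrary threshold) just to show the dispatch pattern.

```python
import numpy as np

def gradient_energy(img):
    """Mean squared gradient magnitude: a crude, assumed texture measure."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.mean(gx ** 2 + gy ** 2))

def choose_registration(img, threshold=10.0):
    """Feature matching (e.g. SURF) needs texture to find keypoints;
    dense TV-L1 optical flow copes better with weakly textured frames.
    The threshold here is illustrative, not from the paper."""
    return "surf" if gradient_energy(img) > threshold else "tv_l1_flow"

rng = np.random.default_rng(0)
textured = 128.0 + 40.0 * rng.standard_normal((16, 16))  # high-texture frame
flat = np.full((16, 16), 128.0)                          # texture-free frame
```

In a mosaicing loop, each incoming frame would be routed to the cheaper or more robust registration path based on this kind of test.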
Projection-optimal tensor local fisher discriminant analysis for image feature extraction
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738587
Zhan Wang, Q. Ruan, Z. Miao
Abstract: Tensor-based feature extraction approaches have proven effective since they can address the undersampled problem. In this paper, we propose a novel method called projection-optimal tensor local Fisher discriminant analysis (PoTLFDA), which shares the character of local Fisher discriminant analysis (LFDA). A novel affinity matrix is defined to effectively reflect the relationships of points in the original tensor space and the embedding space. The projection matrices are optimized by alternately solving the trace ratio problem. A convergence proof of the proposed algorithm is also given. Experimental results on face databases demonstrate the effectiveness of PoTLFDA.
Citations: 0
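The trace ratio problem at the heart of the optimization above admits a classic iterative solver: fix the current ratio, take the top-d eigenvectors of A - ratio*B, and repeat. The sketch below shows that generic iteration on stand-in scatter matrices; the paper's alternating scheme applies it per tensor mode.

```python
import numpy as np

def trace_ratio(A, B, d, iters=50):
    """Maximise tr(V.T A V) / tr(V.T B V) over orthonormal V (n x d) by the
    classic iteration: fix the ratio, take top-d eigenvectors of A - ratio*B."""
    n = A.shape[0]
    V = np.eye(n)[:, :d]
    for _ in range(iters):
        lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        w, U = np.linalg.eigh(A - lam * B)
        V = U[:, -d:]          # eigenvectors of the d largest eigenvalues
    return V

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
A = M @ M.T                    # stand-in "between-class" scatter
M2 = rng.standard_normal((6, 6))
B = M2 @ M2.T + np.eye(6)      # stand-in "within-class" scatter (SPD)

V0 = np.eye(6)[:, :2]
ratio0 = np.trace(V0.T @ A @ V0) / np.trace(V0.T @ B @ V0)  # starting ratio

V = trace_ratio(A, B, d=2)
ratio = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
```

The iteration is monotone: each eigen-step can only keep or increase the trace ratio, which is the basis of convergence proofs like the one the paper gives.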
Simple monocular door detection and tracking
2013 IEEE International Conference on Image Processing | Pub Date: 2013-09-01 | DOI: 10.1109/ICIP.2013.6738809
Rafiq Sekkal, François Pasteau, Marie Babel, Baptiste Brun, I. Leplumey
Abstract: When considering indoor navigation without any prior knowledge of the environment, relevant landmark extraction remains an open issue for robot localization and navigation. In this paper, we consider indoor navigation along corridors. In such environments, when using monocular cameras, doors can be seen as important landmarks. In this context, we present a new framework for door detection and tracking that exploits the geometrical features of corridors. Since real-time properties are required for navigation, designing solutions with low computational complexity remains a relevant issue. The proposed algorithm relies on visual features such as lines and vanishing points, which are combined to discriminate the floor and wall planes and then to recognize doors within the image sequences. Detected doors are used to initialize a dedicated edge-based 2D door tracker. Experiments show that the framework detects 82% of the doors in our dataset while respecting real-time constraints.
Citations: 26
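The vanishing-point geometry the framework above relies on is conveniently expressed in homogeneous coordinates: the line through two points is their cross product, and so is the intersection of two lines. A minimal sketch (the example edge coordinates are invented):

```python
import numpy as np

def line_through(p, q):
    """Homogeneous line through two image points (x, y)."""
    return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])

def intersect(l1, l2):
    """Intersection of two homogeneous lines; when the lines are images of
    parallel 3-D corridor edges, this point is their vanishing point."""
    x, y, w = np.cross(l1, l2)
    return np.array([x / w, y / w])

# Two corridor edges in a 640x480 image, both heading toward (320, 240):
l1 = line_through((0, 480), (160, 360))
l2 = line_through((640, 480), (480, 360))
vp = intersect(l1, l2)
```

With the vanishing point in hand, lines converging to it can be grouped as corridor structure, which is the cue used to separate floor and wall planes before searching for doors.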