2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG): Latest Publications

A robust faint line detection and enhancement algorithm for mural images
Mrinmoy Ghorai, B. Chanda
DOI: 10.1109/NCVPRIPG.2013.6776175 (https://doi.org/10.1109/NCVPRIPG.2013.6776175)
Published: 2013-12-01
Abstract: Mural images are noisy and contain faint and broken lines. Here we propose a novel technique for straight and curved line detection, together with an enhancement algorithm for deteriorated mural images. First, we compute statistics on the gray image using oriented templates; the outcome of this process is taken as the strength of the line at each pixel. As a result, some unwanted lines are also detected in textured regions. Based on the Gestalt law of continuity, we propose an anisotropic refinement that strengthens the true lines and suppresses the unwanted ones. A modified bilateral filter is employed to remove the noise. Experimental results show that the approach robustly restores the lines in mural images.
Citations: 3
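The abstract above outlines an oriented-template line-strength step, followed by a Gestalt-based refinement and a modified bilateral filter. As a rough illustration of the first step only, here is a minimal Python sketch (not the authors' code) that convolves the gray image with zero-mean oriented line kernels and keeps the maximum response per pixel; the kernel length and the number of orientations are illustrative choices, not values from the paper.

```python
import numpy as np
from scipy import ndimage

def oriented_line_strength(gray, length=9, n_orient=8):
    """Maximum response over zero-mean oriented line templates (illustrative parameters)."""
    gray = gray.astype(np.float32)
    strength = np.zeros_like(gray)
    c = length // 2
    for k in range(n_orient):
        theta = np.pi * k / n_orient
        kern = np.zeros((length, length), dtype=np.float32)
        # rasterise a thin line of the given orientation through the kernel centre
        for t in np.linspace(-c, c, 4 * length):
            r = int(round(c + t * np.sin(theta)))
            col = int(round(c + t * np.cos(theta)))
            kern[r, col] = 1.0
        kern /= kern.sum()
        kern -= kern.mean()                      # zero mean: responds to line-like contrast
        resp = np.abs(ndimage.convolve(gray, kern, mode='nearest'))
        strength = np.maximum(strength, resp)    # keep the best orientation per pixel
    return strength
```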
Improving video summarization based on user preferences
R. Kannan, G. Ghinea, Sridhar Swaminathan, Suresh Kannaiyan
DOI: 10.1109/NCVPRIPG.2013.6776187 (https://doi.org/10.1109/NCVPRIPG.2013.6776187)
Published: 2013-12-01
Abstract: Although several automatic video summarization systems have been proposed in the past, a generic summary based only on low-level features will not satisfy every user. Since users' needs and preferences for a summary of the same video differ vastly, a personalized and customized video summarization system has become a pressing need. To address this need, this paper proposes a novel system for generating distinct, semantically meaningful video summaries of the same video that are tailored to the preferences or interests of individual users. The proposed system stitches together a video summary from the requested summary time span and the top-ranked shots that are semantically relevant to the user's preferences. Experimental results on the performance of the proposed video summarization system are encouraging.
Citations: 6
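To make the shot-stitching step concrete, the sketch below ranks shots by a precomputed semantic-relevance score and greedily fills the requested summary time span with the top-ranked shots. The `Shot` structure and its `relevance` field are illustrative placeholders; the paper's actual semantic scoring is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    start: float      # shot start time in seconds
    end: float        # shot end time in seconds
    relevance: float  # semantic relevance to the user's preferences, in [0, 1]

def select_summary_shots(shots, time_span):
    """Greedily pick top-ranked shots until the summary time span is filled."""
    chosen, used = [], 0.0
    for shot in sorted(shots, key=lambda s: s.relevance, reverse=True):
        duration = shot.end - shot.start
        if used + duration <= time_span:
            chosen.append(shot)
            used += duration
    # restore temporal order so the stitched summary plays coherently
    return sorted(chosen, key=lambda s: s.start)
```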
Spatio-temporal feature based VLAD for efficient video retrieval
M. K. Reddy, Sahil Arora, R. Venkatesh Babu
DOI: 10.1109/NCVPRIPG.2013.6776268 (https://doi.org/10.1109/NCVPRIPG.2013.6776268)
Published: 2013-12-01
Abstract: Compact representation of visual content has emerged as an important topic in the context of large-scale image and video retrieval. The recently proposed Vector of Locally Aggregated Descriptors (VLAD) has been shown to outperform other existing retrieval techniques. In this paper, we propose two spatio-temporal features for constructing VLAD vectors for videos in the context of large-scale video retrieval: given a query video, the aim is to retrieve similar videos from the database. Experiments are conducted on the UCF50 and HMDB51 datasets, which pose challenges in the form of camera motion, viewpoint variation, large intra-class variation, etc. The two proposed spatio-temporal features are (i) Local Histogram of Oriented Optical Flow (LHOOF) and (ii) Space-Time Invariant Points (STIP). Their performance is compared with a SIFT-based spatial feature. The mean average precision (MAP) indicates better retrieval performance for the proposed spatio-temporal features than for the spatial feature.
Citations: 6
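For reference, a generic VLAD encoder over a set of local spatio-temporal descriptors and a pre-trained codebook looks roughly like the sketch below (standard VLAD formulation with power and L2 normalisation, not necessarily the paper's exact pipeline).

```python
import numpy as np

def vlad_encode(descriptors, codebook):
    """descriptors: (N, D) local features; codebook: (K, D) cluster centres."""
    K, D = codebook.shape
    # hard-assign each descriptor to its nearest codeword
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    assign = d2.argmin(axis=1)
    vlad = np.zeros((K, D), dtype=np.float64)
    for k in range(K):
        members = descriptors[assign == k]
        if len(members):
            vlad[k] = (members - codebook[k]).sum(axis=0)   # residual aggregation
    vlad = vlad.ravel()
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))            # power normalisation
    norm = np.linalg.norm(vlad)
    return vlad / norm if norm > 0 else vlad                # L2 normalisation
```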
Improvised eigenvector selection for spectral Clustering in image segmentation
Aditya Prakash, S. Balasubramanian, R. R. Sarma
DOI: 10.1109/NCVPRIPG.2013.6776233 (https://doi.org/10.1109/NCVPRIPG.2013.6776233)
Published: 2013-12-01
Abstract: General spectral clustering (SC) algorithms employ the top eigenvectors of the normalized Laplacian for spectral rounding. However, recent research has pointed out that, in the case of noisy and sparse data, not all top eigenvectors may be informative or relevant for clustering, and using them for spectral rounding may lead to poor clustering results. The self-tuning SC method proposed by Zelnik and Perona [1] imposes a very stringent condition for selecting relevant eigenvectors: the best possible alignment with the canonical coordinate system. We analyse their algorithm and relax this best-alignment criterion to an average-alignment criterion. We demonstrate the effectiveness of our improvisation on synthetic as well as natural images by comparing results on the Berkeley segmentation dataset and benchmark.
Citations: 1
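The relaxed selection rule can be illustrated as follows: score each top eigenvector and keep those whose score is at least the average, instead of demanding the single best alignment. The per-eigenvector score used below (energy concentration on the largest entry) is only a crude stand-in for the alignment cost of [1]; the point of the sketch is the keep-if-at-least-average relaxation.

```python
import numpy as np

def select_eigenvectors(eigvecs):
    """eigvecs: (n_points, n_top) top eigenvectors of the normalized Laplacian."""
    energy = (eigvecs ** 2).sum(axis=0)           # total energy per eigenvector
    peak = (eigvecs ** 2).max(axis=0)             # energy of the largest entry
    alignment = peak / np.maximum(energy, 1e-12)  # crude per-eigenvector score
    keep = alignment >= alignment.mean()          # average-alignment relaxation
    return eigvecs[:, keep]
```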
Salient object detection in SfM point cloud
Divyansh Agarwal, N. Soni, A. Namboodiri
DOI: 10.1109/NCVPRIPG.2013.6776194 (https://doi.org/10.1109/NCVPRIPG.2013.6776194)
Published: 2013-12-01
Abstract: In this paper we present a max-flow min-cut based salient object detection method for the 3D point clouds produced by a Structure from Motion (SfM) pipeline. The SfM pipeline generates a noisy point cloud because unwanted background scenery is captured along with the object in the image dataset. Since the background points are sparse and not meaningful, it becomes necessary to remove them; subsequent processes (such as surface reconstruction) that use the cleaned-up model are then no longer hindered by the removed noise. We present a novel approach in which the camera centers are used to segment out the salient object. The algorithm is completely autonomous and does not need any user input. We test the proposed method on Indian historical models reconstructed through SfM and evaluate the results in terms of selectivity and specificity.
Citations: 0
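A rough sketch of a max-flow/min-cut foreground-background split over an SfM point cloud is given below. It assumes a k-nearest-neighbour graph over the points and derives unary seeds from proximity to the centroid of the camera centres (cameras typically surround the object of interest); the capacities and the energy are illustrative, not the paper's exact formulation.

```python
import numpy as np
import networkx as nx
from scipy.spatial import cKDTree

def segment_salient(points, cam_centers, k=8, sigma=0.1):
    """points: (N, 3) SfM cloud; cam_centers: (M, 3) camera centres from SfM."""
    centroid = cam_centers.mean(axis=0)           # cameras typically surround the object
    dist = np.linalg.norm(points - centroid, axis=1)
    fg_prior = np.exp(-dist / dist.mean())        # nearer the centroid -> more salient

    G = nx.DiGraph()
    _, nn = cKDTree(points).query(points, k=k + 1)
    for i in range(len(points)):
        # terminal capacities: source 's' = salient object, sink 't' = background
        G.add_edge('s', i, capacity=float(fg_prior[i]))
        G.add_edge(i, 't', capacity=float(1.0 - fg_prior[i]))
        for j in nn[i, 1:]:                       # pairwise smoothness edges
            w = float(np.exp(-np.sum((points[i] - points[j]) ** 2) / sigma ** 2))
            G.add_edge(i, int(j), capacity=w)
            G.add_edge(int(j), i, capacity=w)

    _, (reachable, _) = nx.minimum_cut(G, 's', 't')
    return np.array([i in reachable for i in range(len(points))])
```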
Pan-sharpening based on Non-subsampled Contourlet Transform detail extraction
Kishor P. Upla, P. Gajjar, M. Joshi
DOI: 10.1109/NCVPRIPG.2013.6776258 (https://doi.org/10.1109/NCVPRIPG.2013.6776258)
Published: 2013-12-01
Abstract: In this paper, we propose a new pan-sharpening method based on the Non-subsampled Contourlet Transform (NSCT). Many satellites provide panchromatic (Pan) images with high spatial resolution and multi-spectral (MS) images with high spectral resolution; a pan-sharpened image with both high spatial and high spectral resolution is obtained from these two. Since the NSCT is shift invariant and has better directional decomposition capability than the contourlet transform, we use it to extract the high-frequency information from the available Pan image. First, a two-level NSCT decomposition is performed on the Pan image, which has high spatial resolution. The required high-frequency details are obtained by subtracting the coarse subband of this decomposition from the original Pan image. These extracted details are then added to the MS image such that the original spectral signature is preserved in the final fused image. Experiments have been conducted on images captured by different satellite sensors such as Ikonos-2, Worldview-2 and Quickbird. Traditional quantitative measures, along with the quality with no reference (QNR) index, are evaluated to assess the potential of the proposed method. The proposed approach performs better than recently proposed state-of-the-art methods such as the additive wavelet luminance proportional (AWLP) method and the context based decision (CBD) method.
Citations: 7
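The detail-injection step can be sketched as follows. The paper obtains the coarse band from a two-level NSCT decomposition; in this illustration a Gaussian low-pass filter stands in for that coarse band, since the NSCT is not available in common Python libraries, and the sigma value is an arbitrary choice.

```python
import numpy as np
from scipy import ndimage

def inject_pan_details(pan, ms_upsampled, sigma=2.0):
    """pan: (H, W) float array; ms_upsampled: (H, W, B) MS image resampled to the Pan grid."""
    coarse = ndimage.gaussian_filter(pan, sigma=sigma)  # stand-in for the NSCT coarse band
    details = pan - coarse                              # high-frequency Pan details
    # the same spatial details are added to every band, which keeps the
    # relative band values (the spectral signature) largely unchanged
    return ms_upsampled + details[..., None]
```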
A learning based approach for dense stereo matching with IGMRF prior
S. Nahar, M. Joshi
DOI: 10.1109/NCVPRIPG.2013.6776264 (https://doi.org/10.1109/NCVPRIPG.2013.6776264)
Published: 2013-12-01
Abstract: In this paper, we propose a learning-based approach to dense stereo matching using an edge-preserving regularization prior. Given a test stereo pair and a training database consisting of disparity maps estimated from multi-view stereo images together with their corresponding ground truths, we obtain the disparity map for the test pair. We first obtain an initial disparity estimate by learning the disparities from the available database, using a new learning-based approach that exploits both the estimated and the true disparities. Since disparity estimation is an ill-posed problem, the final disparity map is obtained within a regularization framework. The prior model for the disparity map is chosen as an Inhomogeneous Gaussian Markov Random Field (IGMRF). Assuming that the spatial variations among the disparity values captured in the initial estimate correspond to the variations in the true disparities, we obtain the IGMRF parameters at every pixel location from the initial estimate. A graph-cuts based method is used to optimize the energy function in order to obtain the global minimum. Experimental results on a standard dataset demonstrate the effectiveness of the proposed approach.
Citations: 2
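As an illustration of how an inhomogeneous (spatially varying) smoothness prior can be driven by an initial disparity estimate, the sketch below derives per-edge weights that weaken the smoothing wherever the initial disparities change sharply. The exact IGMRF parameterisation used in the paper is not reproduced; the `eps` floor and the 1/(2g²) form are assumptions made for this sketch.

```python
import numpy as np

def igmrf_weights(init_disp, eps=1e-3):
    """Return horizontal and vertical smoothness weights from an initial disparity map."""
    dx = np.abs(np.diff(init_disp, axis=1))       # horizontal finite differences, (H, W-1)
    dy = np.abs(np.diff(init_disp, axis=0))       # vertical finite differences, (H-1, W)
    bx = 1.0 / (2.0 * np.maximum(dx, eps) ** 2)   # weak smoothing across strong edges
    by = 1.0 / (2.0 * np.maximum(dy, eps) ** 2)
    return bx, by
```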
Enhancement of camera captured text images with specular reflection
A. Visvanathan, T. Chattopadhyay, U. Bhattacharya
DOI: 10.1109/NCVPRIPG.2013.6776189 (https://doi.org/10.1109/NCVPRIPG.2013.6776189)
Published: 2013-12-01
Abstract: Specular reflection of light degrades the quality of scene images. Whenever specular reflection affects the text portion of such an image, its readability is reduced significantly, and it consequently becomes difficult for OCR software to detect and recognize the text. In the present work, we propose a novel but simple technique to enhance image regions affected by specular reflection. Pixels with specular reflection are first identified in the YUV color space; the affected region is then enhanced by interpolating plausible pixel values in YUV space. The proposed method has been compared against a few existing general-purpose image enhancement techniques: (i) histogram equalization, (ii) gamma correction and (iii) a Laplacian-filter-based enhancement method. The approach has been tested on images from the ICDAR 2003 Robust Reading Competition database, and a Mean Opinion Score based measure shows that the proposed method outperforms the existing enhancement techniques at improving the readability of text in images affected by specular reflection.
Citations: 7
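A rough sketch of the two steps in the abstract, with illustrative thresholds: flag bright, nearly achromatic pixels in YUV as likely specular, then fill them from their neighbourhood. OpenCV inpainting is used here only as a stand-in for the paper's own YUV interpolation; the threshold values are assumptions.

```python
import cv2
import numpy as np

def suppress_specular(bgr, y_thresh=235, chroma_thresh=12):
    """bgr: 8-bit colour image; thresholds are illustrative, not from the paper."""
    yuv = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)
    y, u, v = cv2.split(yuv)
    # specular highlights: very bright and nearly achromatic (U, V close to 128)
    achromatic = (np.abs(u.astype(int) - 128) < chroma_thresh) & \
                 (np.abs(v.astype(int) - 128) < chroma_thresh)
    mask = ((y > y_thresh) & achromatic).astype(np.uint8) * 255
    # fill the flagged region from its neighbourhood (stand-in for YUV interpolation)
    return cv2.inpaint(bgr, mask, 5, cv2.INPAINT_TELEA)
```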
Multi-resolution image fusion using multistage guided filter
Sharad Joshi, Kishor P. Upla, M. Joshi
DOI: 10.1109/NCVPRIPG.2013.6776257 (https://doi.org/10.1109/NCVPRIPG.2013.6776257)
Published: 2013-12-01
Abstract: In this paper, we propose a multi-resolution image fusion approach based on a multistage guided filter (MGF). Given a high spatial resolution panchromatic (Pan) image and a high spectral resolution multi-spectral (MS) image, the fusion algorithm obtains a single fused image having both high spectral and high spatial resolution. We extract the missing high-frequency details of the MS image using the multistage guided filter. The detail extraction process exploits the relationship between the Pan and MS images by using one of them as the guidance image and extracting details from the other; the spatial distortion of the MS image is thereby reduced by consistently combining the details obtained from both types of image. The final fused image is obtained by adding the extracted high-frequency details to the corresponding MS image. The results of the proposed algorithm are compared with commonly used traditional methods as well as with a recently proposed method on Quickbird, Ikonos-2 and Worldview-2 satellite images. The quantitative assessment uses conventional measures as well as a relatively new index, quality with no reference (QNR), which does not require a reference image. The results and measures show a significant improvement in the quality of the fused image with the proposed approach.
Citations: 4
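To illustrate guided-filter-based detail injection, the sketch below implements a single-stage guided filter (the standard box-filter formulation) and uses it to transfer high-frequency Pan details to one upsampled MS band; the multistage variant and the parameter values of the paper are not reproduced, and `radius` and `eps` are arbitrary choices.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=8, eps=1e-3):
    """Single-stage guided filter (box-filter formulation)."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    var_g = uniform_filter(guide * guide, size) - mean_g ** 2
    a = cov_gs / (var_g + eps)                    # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def fuse_band(pan, ms_band):
    """Inject Pan high-frequency details into one upsampled MS band."""
    pan = pan.astype(np.float64)
    ms_band = ms_band.astype(np.float64)
    pan_low = guided_filter(ms_band, pan)         # Pan smoothed under MS guidance
    details = pan - pan_low                       # missing high-frequency details
    return ms_band + details
```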
Estimation of the orientation and distance of a mirror from Kinect depth data
Tanwi Mallick, P. Das, A. Majumdar
DOI: 10.1109/NCVPRIPG.2013.6776248 (https://doi.org/10.1109/NCVPRIPG.2013.6776248)
Published: 2013-12-01
Abstract: In many common applications of the Microsoft Kinect™, including navigation, surveillance and 3D reconstruction, it is necessary to estimate the geometry of mirrors or other reflecting surfaces in the field of view. This is often difficult because, in most positions, a mirror does not diffusely reflect the projected speckle pattern and hence cannot be seen in the Kinect depth map; a mirror shows up as unknown depth. However, suitably placed objects reflected in the mirror can provide important clues to the orientation and distance of the mirror. In this paper we present a method that uses a ball and its mirror image to set up point-to-point correspondences between object and image points and solve for the geometry of the mirror. With this, simple estimators are designed for the orientation and distance of a vertical plane mirror with respect to the Kinect camera. In addition, an estimator is presented for the diameter of the ball. The estimators are validated through a set of experiments.
Citations: 3
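The core geometric idea admits a very short sketch: a point and its mirror image are symmetric about the mirror plane, so the plane is the perpendicular bisector of the segment joining them. Assuming the 3D centres of the ball and of its reflection have already been estimated from the depth data (that estimation step is not shown), the mirror normal and its distance from the Kinect origin follow directly.

```python
import numpy as np

def mirror_plane_from_pair(p_obj, p_img):
    """p_obj, p_img: 3D centres (in metres) of the ball and of its mirror image."""
    n = p_obj - p_img
    n = n / np.linalg.norm(n)                    # unit normal of the mirror plane
    midpoint = 0.5 * (p_obj + p_img)             # lies on the mirror plane
    distance = abs(float(np.dot(n, midpoint)))   # plane distance from the Kinect origin
    return n, distance
```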