2013 Fourth National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics (NCVPRIPG): Latest Publications

A combined fuzzy and level sets' based approach for brain MRI image segmentation
B. Anami, Prakash H. Unki
DOI: 10.1109/NCVPRIPG.2013.6776216
Abstract: The different brain tissues, namely gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF), are spread over the entire brain, and their boundaries are not well defined, so they are difficult to demarcate individually in a brain image. This paper proposes a methodology based on modified fuzzy C-means (MFCM) and level-set segmentation for automated segmentation of brain MRI images into WM, GM, and CSF. An initial segmentation is produced by the MFCM approach, and its result is given as input to the level-set method. We tested the methodology on 100 different brain MRI images and compared the results with those of the individual MFCM and level-set methods. Ten expert radiologists were consulted to corroborate the results, rating them as `Accurate', `Satisfactory', `Adequate', or `Not acceptable'. The results obtained using only level sets were rated `Not acceptable'; most results obtained using MFCM alone were rated `Adequate'; the results of the combined method were rated `Satisfactory'. Hence, combined MFCM and level-set segmentation is considered better than either method used individually. The combined approach avoids manual intervention and also requires less time to segment than the level-set method. The proposed methodology is helpful to radiologists in hospitals for brain MRI image analysis.
Citations: 20
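The MFCM initialisation stage builds on the standard fuzzy C-means iteration, which alternates membership and centroid updates. A minimal sketch of plain FCM on 1-D intensity values follows; the paper's specific MFCM modification and the level-set refinement stage are not reproduced here, and the fuzzifier `m=2` is just the conventional default.

```python
# Plain fuzzy C-means on 1-D intensities: a simplified stand-in for the
# MFCM initialisation described in the abstract (not the paper's exact method).
def fcm_1d(xs, centers, m=2.0, iters=50):
    for _ in range(iters):
        # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        u = []
        for x in xs:
            d = [abs(x - c) for c in centers]
            if min(d) == 0.0:  # point coincides with a centre: crisp membership
                row = [1.0 if di == 0.0 else 0.0 for di in d]
            else:
                row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                                 for j in range(len(centers)))
                       for i in range(len(centers))]
            u.append(row)
        # Centre update: c_i = sum_k u_ik^m x_k / sum_k u_ik^m
        centers = [sum(u[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(len(centers))]
    return centers, u
```

On two well-separated intensity clusters the centres converge close to the cluster means, and the fuzzy memberships of boundary pixels stay graded rather than hard, which is what makes FCM a natural initialiser for a level-set stage.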
Decision fusion for robust horizon estimation using Dempster Shafer Combination Rule
R. Tabib, Ujwala Patil, Syed Altaf Ganihar, N. Trivedi, U. Mudenagudi
DOI: 10.1109/NCVPRIPG.2013.6776247
Abstract: In this paper, we address the problem of decision fusion for robust horizon estimation using the Dempster Shafer Combination Rule (DSCR). We provide a decision-fusion framework that selects a robust horizon estimate out of `n' estimates based on a confidence factor. Vision-based attitude estimation depends on robust horizon estimation, and no single algorithm gives accurate results for all kinds of scenarios. We propose to combine the evidence parameters using DSCR to generate a confidence factor that justifies the correctness of the estimated horizon. We compute a Confidence Interval (CI) based on a Gaussian Mixture Model (GMM), and we also propose two techniques that provide evidence parameters for the estimated horizon using the CI. We demonstrate the effectiveness of the decision framework on clear and noisy data sets of simulated and real images/videos captured by a Micro Air Vehicle (MAV).
Citations: 8
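The combination rule itself is standard: two basic mass assignments are fused by multiplying masses of intersecting focal elements and renormalising by the conflict mass K. A minimal sketch, with a hypothetical two-element frame ("horizon correct" vs. its complement) standing in for the paper's actual evidence parameters:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose focal
    elements are frozensets: m(A) = sum_{B & C == A} m1(B) m2(C) / (1 - K)."""
    combined = {}
    conflict = 0.0  # K: total mass assigned to empty intersections
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}
```

For example, if one evidence source assigns 0.6 to "horizon correct" (rest to ignorance) and a second assigns 0.5, the fused belief in "horizon correct" rises to 0.8, which is the sense in which DSCR concentrates agreement into a single confidence factor.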
High performance VLSI implementation of Context-based Adaptive Variable Length Coding (CAVLC) for H.264 encoder
R. Mukherjee, V. Mahajan, I. Chakrabarti, S. Sengupta
DOI: 10.1109/NCVPRIPG.2013.6776186
Abstract: The H.264 video coding standard uses Context-based Adaptive Variable Length Coding (CAVLC) as one of its entropy encoding techniques. This paper proposes a VLSI architecture for the CAVLC algorithm. The designed hardware meets the speed required by H.264 without compromising on hardware cost. The CAVLC encoder works at a maximum clock frequency of 126 MHz when implemented with Xilinx 10.1i on Virtex-5 technology, a speed that compares favourably with other existing works. The implemented architecture meets the rate required for processing HD-1080 format video sequences.
Citations: 0
Multispectral palmprint matching based on joint sparse representation
B. H. Shekar, N. Harivinod
DOI: 10.1109/NCVPRIPG.2013.6776243
Abstract: A novel method for multispectral palmprint matching based on joint sparse representation is proposed. We use joint sparse representation to model an identity-assurance system that involves both identification and verification. The method represents a given palmprint as a linear combination of the multispectral palmprints, and the information from the different spectra is fused by feature-level fusion. Nearest-neighbour classification based on class-wise reconstruction error is used for classification. Experiments conducted on the PolyU multispectral palmprint database show that the proposed method works better than existing techniques.
Citations: 2
Tsallis and Renyi's embedded entropy based mutual information for multimodal image registration
Subhaluxmi Sahoo, P. Nanda, Sunita Samant
DOI: 10.1109/NCVPRIPG.2013.6776207
Abstract: In this paper, an embedded entropy based image registration scheme is proposed. Tsallis and Renyi's entropies are embedded to form a new entropic measure, and this parametrized entropy is used to determine a weighted mutual information (MI) for CT and MR brain images. Registration is obtained by maximizing the embedded mutual information, and the notion is also validated for feature-space registration. The mutual information with respect to the registration parameter is found to be a nonlinear curve. Feature-space registration is found to yield higher mutual-information values, and hence a smoother registration process. We use the Simulated Annealing algorithm to find the maximum of the embedded mutual information and thereby register the images.
Citations: 4
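The two base entropies being embedded are standard parametric generalisations of Shannon entropy. The paper's embedded/weighted combination is its own contribution and is not reproduced here; the sketch below shows only the two textbook definitions on a discrete distribution:

```python
import math

def tsallis_entropy(p, q):
    # S_q = (1 - sum_i p_i^q) / (q - 1); tends to Shannon entropy as q -> 1
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def renyi_entropy(p, alpha):
    # H_a = ln(sum_i p_i^a) / (1 - a); tends to Shannon entropy as a -> 1
    return math.log(sum(pi ** alpha for pi in p)) / (1.0 - alpha)
```

Both reduce to zero for a degenerate distribution and are maximised by the uniform distribution, so either can replace Shannon entropy inside an MI-style registration criterion, with the extra parameter (q or alpha) tuning sensitivity to rare versus common intensity pairs.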
Predicting the imagined contents using brain activation
Krishna P. Miyapuram, W. Schultz, P. Tobler
DOI: 10.1109/NCVPRIPG.2013.6776230
Abstract: Mental imagery refers to percept-like experiences in the absence of sensory input. Brain imaging studies suggest common, modality-specific neural correlates of imagery and perception. We associated abstract visual stimuli with either visually presented or imagined monetary rewards and scrambled pictures. Brain images from a group of 12 participants were collected using functional magnetic resonance imaging. Statistical analysis showed that human midbrain regions were activated irrespective of whether the monetary rewards were imagined or visually present. A support vector machine trained on the midbrain activation patterns for the visually presented rewards predicted with 75% accuracy whether participants imagined the monetary reward or the scrambled picture during imagination trials; training samples were drawn from visually presented trials and classification accuracy was assessed on imagination trials. These results support the use of machine learning techniques for classifying underlying cognitive states from brain imaging data.
Citations: 2
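The cross-condition design (train on perception trials, test on imagery trials) is the interesting part of the decoding pipeline. As a schematic, a nearest-centroid classifier is used below as a deliberately simple stand-in for the paper's SVM; the two-voxel patterns and class names are invented for illustration:

```python
def train_centroids(samples, labels):
    # Mean activation pattern per class, estimated from the training
    # (visually presented) trials only.
    sums, counts = {}, {}
    for x, y in zip(samples, labels):
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + xi for s, xi in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def classify(x, centroids):
    # Assign a held-out (imagination) trial to the nearest class centroid.
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda y: sqdist(x, centroids[y]))
```

The key property the sketch preserves is that no imagery trial influences the fitted model, so above-chance accuracy on imagery trials is evidence that perception and imagery share activation patterns.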
Multi-spectral demosaicing technique for single-sensor imaging
H. Aggarwal, A. Majumdar
DOI: 10.1109/NCVPRIPG.2013.6776236
Abstract: A generic filter-array design is proposed to capture multi-spectral images using hypothetical single-sensor multi-spectral cameras. The design idea is based on uniform sampling of intensity values from each band, irrespective of the spectral properties of any particular band. A reconstruction technique is also proposed to linearly interpolate the unknown intensity values of the other bands at each pixel. The proposed technique was evaluated on two multispectral image datasets, one from the Landsat satellite and the other from the Apogee Alta U260 cooled CCD camera, with quantitative evaluation done using the peak signal-to-noise ratio.
Citations: 12
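The evaluation metric named in the abstract, peak signal-to-noise ratio, has a standard closed form: PSNR = 10 log10(MAX^2 / MSE). A minimal sketch over flat pixel sequences (8-bit peak of 255 assumed):

```python
import math

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences:
    10 * log10(peak^2 / MSE). Higher is better; identical inputs give +inf."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0.0:
        return float('inf')
    return 10.0 * math.log10(peak ** 2 / mse)
```

For demosaicing evaluation, PSNR is typically computed per band against the fully sampled ground-truth image and then reported per band or averaged.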
Dynamic signature verification for secure retrieval of classified information
Jayashri Vajpai, J. B. Arun, Ishani Vajpai
DOI: 10.1109/NCVPRIPG.2013.6776170
Abstract: With the growth of web-enabled services and e-commerce, a tremendous amount of information is now readily available on the Internet. A large proportion of this is classified information, which has to be protected against unauthorized access. A password or PIN can be used in conjunction with a digital signature to verify the identity of users. This paper proposes a dynamic handwritten signature verification based access-control system that can be employed in the legal, banking, and commercial domains for designing secure information retrieval systems. The dynamic handwritten signature is captured using a digital tablet or PDA (Personal Digital Assistant) with a contact-sensitive acquisition system. After preprocessing, the signature data is compared with templates of authorized signatures by an innovative neuro-fuzzy pattern recognition system that senses the pressure variable and the total time required to execute the signature in order to uniquely identify the potential user. The matching error is used to decide whether access to the classified document is permitted or denied. The neuro-fuzzy technique applied in the dynamic signature system is based on an evolving fuzzy neural network, and it has been tested on signatures drawn from a signature verification competition database obtained from the Internet. Encouraging results show that this technique is a good candidate for the development of practical applications.
Citations: 3
Removal of hand-drawn annotation lines from document images by digital-geometric analysis and inpainting
Sanjoy Pratihar, Partha Bhowmick, S. Sural, J. Mukhopadhyay
DOI: 10.1109/NCVPRIPG.2013.6776179
Abstract: The performance of an OCR system is badly affected by the presence of hand-drawn annotation lines in various forms, such as underlines, circular lines, and other text-surrounding curves. Such annotation lines are usually drawn freehand by a reader to summarize some text or to mark keywords within a document page. In this paper, we propose a generalized scheme for detecting and removing these hand-drawn annotations from a scanned document page. An underline drawn by hand is roughly horizontal or has a tolerable undulation, whereas the slope of a hand-drawn curved line usually changes at a gradual pace. Based on this observation, we detect the cover of an annotation object, be it straight or curved, as a sequence of straight edge segments. The novelty of the proposed method lies in its ability to compute the exact cover of the annotation object even when it touches or passes through a text character. After obtaining the annotation cover, an effective inpainting method is used to quantify the regions where text reconstruction is needed. We have experimented with various documents written in English, and some results are presented here to show the efficiency and robustness of the proposed method.
Citations: 3
Time driven video summarization using GMM
Sujatha C, Ravindra Akshay Chivate, Sayed Altaf Ganihar, U. Mudenagudi
DOI: 10.1109/NCVPRIPG.2013.6776205
Abstract: In this paper, we propose a method to browse the activities present in long videos within a user-defined time. Browsing activities is important for a variety of applications and consumes a large amount of viewing time for long videos. The aim is to generate a summary of the video that retains the salient activities within a given time. We propose a method for selecting salient activities using the motion of feature points as the key parameter, where the saliency of a frame depends on the total motion and the specified summarization time. The motion information in the video is modeled as a Gaussian mixture model (GMM) to estimate the key motion frames, and the salient frames are detected according to the motion strength of the keyframes and the user-specified time, yielding a summary that keeps the chronology of activities. The proposed method finds applications in the summarization of surveillance videos, movies, TV serials, etc. We demonstrate the proposed method on different types of videos, achieve results comparable with the stroboscopic approach, and maintain chronology with an average retention ratio of 95%.
Citations: 9
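The final selection step described in the abstract (keep the strongest-motion frames that fit the user's time budget, then restore chronological order) can be sketched independently of the GMM. The ranking below is a plain top-k over per-frame motion scores, a simplified stand-in for the paper's GMM-based key-motion-frame estimate; the scores and frame rate are invented:

```python
def summarize(motion, fps, target_seconds):
    """Pick the frames with the strongest motion that fit the requested
    summary length, then restore chronological order."""
    budget = int(target_seconds * fps)  # number of frames the summary may keep
    # Rank frame indices by motion strength, strongest first.
    ranked = sorted(range(len(motion)), key=lambda i: motion[i], reverse=True)
    # Keep the top `budget` frames, re-sorted by index to preserve chronology.
    return sorted(ranked[:budget])
```

Returning indices in ascending order is what preserves the chronology of activities; the time budget, not a fixed saliency threshold, determines how many frames survive.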