{"title":"Optimal Multilevel Thresholding for Image Segmentation Using Contrast-Limited Adaptive Histogram Equalization and Enhanced Convergence Particle Swarm Optimization","authors":"Veola Mavis Nazareth, K. Amulya, K. Manikantan","doi":"10.1109/NCVPRIPG.2011.51","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.51","url":null,"abstract":"This paper proposes a Tsallis entropy based multilevel thresholding method for image segmentation, using Contrast-Limited Adaptive Histogram Equalization(CLAHE) and a novel algorithm called Enhanced Convergence Particle Swarm Optimization(ECPSO). This is done to optimize the thresholds so that better image segmentation is obtained. Ten test images have been used to obtain the results, which are then compared with those obtained from Genetic Algorithm(GA), Particle Swarm Optimization(PSO) and Bacterial Foraging(BF) algorithms. The results obtained by the proposed method have been found to be significantly better than those obtained by the above mentioned algorithms.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121371818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A New Measure of Detail for Triangulated Meshes","authors":"Ishaan Singh, B.V. Rohith, P.J. Naryanan","doi":"10.1109/NCVPRIPG.2011.45","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.45","url":null,"abstract":"As the complexity of 3D models used in computer graphics applications grows, there arises a need to visualize the overall distribution of detail on them. Detail is a function of the amount of information present on a surface. In this paper, we present a method to quantify detail using a combination of local measures of curvature and density. We show that detail can be used for applications like ordering for mesh decimation, visualizing abnormalities in a mesh and so on.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125058265","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Script Identification from Handwritten Document","authors":"K. Roy, S. K. Das, S. Obaidullah","doi":"10.1109/NCVPRIPG.2011.22","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.22","url":null,"abstract":"Every country has their own language and script. This may or may not common to other countries. To communicate with each other we need to have a common language. English is the language that is performing that role. So most of the countries (other than Roman) use bi-script documents. But for countries like India where we have a total of 12 official scripts (and 22 languages) things are more complex. So to have an OCR we need to identify the script by which the script the document is written (even the document is not itself multi-script). Postal document, pre-printed forms are good example of such documents. So identification of the script from a document may be written with any of these 13 scripts is a very challenging work. In this paper we have tried to identify scripts written by any of the 6 official languages of India. Here we have used very simple and efficient feature at component level for the same. Using Fractal-based features, component based feature and Topological features, series of classifiers were used. Overall accuracy of the proposed system is at present 89.48% on the test set without rejection.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"72 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115026207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Ear Detection for Online Biometric Applications","authors":"Amioy Kumar, M. Hanmandlu, M. Kuldeep, H. Gupta","doi":"10.1109/NCVPRIPG.2011.69","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.69","url":null,"abstract":"The popularity of the ear biometrics is due to its unique distinct structure and high user convenience. However, the presence of hair and other skin attributes makes the automatic detection of ear contour a real challenge for online applications. This paper presents an online biometric authentication using ear contours acquired from a robust peg free acquisition set up. The Gaussian classifiers are used to first segment the skin and non-skin areas in the ear images. Laplacian of Gaussian is then used to compute edges of the skin areas, which helped to get ear-ROI images. A localized region based active contour is finally located in the ear-ROI images. The ear-contours are then employed for the authentication using log Gabor and SIFT features. The experimental results carried out on 700 ear images confirm the utility of the proposed approach.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114386729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Effect of Block Size, Training Set and K-Value in the Classification of Food Grains Using HSI Color Model","authors":"Neelamma K. Patil, R. Yadahalli","doi":"10.1109/NCVPRIPG.2011.18","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.18","url":null,"abstract":"The method proposed for the classification of food grains can be divided into two phases: 1) color, global and local feature extraction 2) classification using extracted features. This paper presents the effect of block size, training set and K-value in the classification of food grains using HSI color model by combining color and texture information without pre-processing. The first phase in the proposed system is feature extraction. The features are computed locally and globally. The co-occurrence matrix helps to extract features locally. The non-uniformity of RGB color space is eliminated by Hue, Saturation and Intensity (HSI) color space. Further, minimum distance and K nearest neighbour algorithms are used for classification. Percentage of correct classification and error analysis are carried out by confusion matrix. The proposed work attains maximum average accuracy of 95.83% and 70.37% for block size 512x512 and 256x256 with K=5 and K=1 respectively.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129918811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sunspot Number Calculation Using Clustering","authors":"Ujjwal Dasgupta, Siddharth Singh, Varun Jewalikar","doi":"10.1109/NCVPRIPG.2011.43","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.43","url":null,"abstract":"We present a method for automatically detecting the number of sunspots and sunspot groups in standardized images of the Sun. We use adaptive thresholding to isolate the sunspots, and data clustering techniques to classify them into groups. By working on both the MDI Continuum and Magneto gram images, our method groups those sunspots together which originate from the same magnetic flux loop. To the best of our knowledge, such a method has not been used for Sunspot group analysis. To measure the accuracy of the algorithm, we compute the Sunspot Number for each image of the sun and compare that with standard results available on the Space Weather website. As our results show, this calculation can be done with competent accuracy.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131643898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Parametric Video Compression Using Epitome Based Texture Synthesis","authors":"S. Bansal, Santanu Chahudhury, B. Lall","doi":"10.1109/NCVPRIPG.2011.29","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.29","url":null,"abstract":"We present a video compression scheme using epitome based texture coding that uses a low quality video as side information and improves it by using the epitome. The side information is sent at different levels of quantization and resolution to optimize the quality against the bit rate. The concept of motion threading is used for propagation of epitome information from one frame to another. The proposed scheme shows promising results for low bit rate side information case because of the reduction in the level of blurring due to the introduction of high frequency information derived from the epitome.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131216873","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tensor Voting Based Foreground Object Extraction","authors":"Mandar Kulkarni, A. Rajagopalan","doi":"10.1109/NCVPRIPG.2011.27","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.27","url":null,"abstract":"Robust foreground extraction is necessary for good performance of any computer vision application such as tracking or video surveillance. In this paper, we propose a novel foreground extraction technique for static cameras which works for indoor as well as outdoor scenes. We model colors in a background frame by Gaussians using non-iterative tensor voting framework. For input frame, we compare color features of each pixel against background model and those that do not follow the model are classified as foreground pixels. We update background model to account for scene and lighting changes over time. In the case of significant background motion, we incorporate motion vectors within tensor voting framework to reduce misclassification. Experiments show that our approach is robust to background motion, noise, illumination fluctuations, scene and lighting changes.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124580041","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Whose Album Is This?","authors":"Abhinav Goel, C. V. Jawahar","doi":"10.1109/NCVPRIPG.2011.26","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.26","url":null,"abstract":"We present a method to identify the owner of a photo album taken off a social networking site. We consider this as a problem of prominent person mining. We introduce a new notion of prominent persons, and propose a greedy solution based on an eigenface representation. We mine prominent persons in a subset of dimensions in the eigenface space. We present excellent results on multiple datasets downloaded from the Internet.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124009761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Architecture Design for Median Filter","authors":"Subarna Chatterjee, A. Ray, Rezaul Karim, A. Biswas","doi":"10.1109/NCVPRIPG.2011.59","DOIUrl":"https://doi.org/10.1109/NCVPRIPG.2011.59","url":null,"abstract":"Speckle is a type of multiplicative noise degrading visual quality in imaging using ultrasonography(USG) resulting in difficulty of assessment by experts. Thus speckle reduction algorithms are required for enhancing image quality of USG and assisting in their visual assessment. The objective of this work is to define an efficient technique for median filter of USG images. In this paper we have captured real life USG images, applied median filter algorithm to reduce noise, and proposed a suitable hardware architecture for the implementation of this algorithm. MATLAB has been used to develop our algorithm and logic verification of the proposed architecture has been done using VHDL. We have also done the simulation and FPGA based synthesis of the proposed architecture for the most commonly used target hardware to analyze the hardware cost.","PeriodicalId":285162,"journal":{"name":"2011 Third National Conference on Computer Vision, Pattern Recognition, Image Processing and Graphics","volume":"52 1-2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126941030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}