{"title":"Extraction of 3D Line Segment Using Disparity Map","authors":"Dong-Min Woo, Dong-Chul Park, Seung-Soo Han","doi":"10.1109/ICDIP.2009.31","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.31","url":null,"abstract":"3D line segment can be regarded as one of the most useful features in constructing 3D model. In this context, this paper presents anew 3D line segment extraction method by using disparity map generated in the process of stereo matching. The core of our technique is that feature matching is carried out by the reference of the disparity evaluated by area-based stereo. Since the reference of the disparity can significantly reduce the number of feature matching combinations, feature matching error can be drastically minimized. One requirement of the disparity to be referenced is that it should be reliable to be used in feature matching. To measure the reliability of the disparity, in this paper, we employ the self-consistency of the disparity. Our suggested technique misapplied to the detection of 3D line segments by 2D line matching using our hybrid stereo matching, which can be efficiently utilized in the generation of the rooftop model from urban images.Since occlusions are occurred around the outlines of buildings, we use multi-image stereo scheme by fusing 3D line segments extracted from several pairs of stereo images. 
The suggested method is evaluated on the Avenches data set of Ascona aerial images. Experimental results indicate that the extracted 3D line segments have an average error of 0.5m and can be efficiently used for the construction of 3D site models with simple 3D line grouping.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133789763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-pipeline Architecture for Face Recognition on FPGA","authors":"Sathaporn Visakhasart, O. Chitsobhuk","doi":"10.1109/ICDIP.2009.48","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.48","url":null,"abstract":"In this paper, a new multi-pipeline architecture is proposed for face recognition system on FPGA. The proposed structure consists of four main units: Multi-Pipeline Control Unit (MPCU), Process Element Unit (PEU), Region Summing Unit (RSU), and Recognition Indexing Unit (RIU). Four recognition techniques: Principal Component Analysis (PCA), Modular PCA (MPCA), Weight MPCA (WMPCA), and Wavelet based techniques are adopted to evaluate the efficiency of the proposed architecture using several standard face databases. The experimental results show that the proposed architecture helps minimizing processing time through its multi-pipeline processes while still maintains high recognition rate. Moreover, the design has encouraged the reduction in hardware resources by utilizing the proposed reusable modules.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132711785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Using Image Deconvolution to Increase the Ability to Detect Stars and Faint Orbital Objects in CCD Imaging","authors":"J. Núñez, O. Fors, A. Prades","doi":"10.1109/ICDIP.2009.12","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.12","url":null,"abstract":"In this paper we show how the techniques of image deconvolution can increase the ability to detect faint stars or faint orbital objects (small satellite and space debris) in CCD images. In the case of faint stars, we show that this benefit is equivalent to double the quantum efficiency of the CCD detector or to increase the effective telescope aperture by up to 40% without decreasing the astrometric precision or introducing artificial bias. In the case of orbital objects, the deconvolution technique can double the signal to noise ratio helping to discover and control dangerous objects as space debris or lost satellites.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117159228","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Click to Zoom-Inside Graphical Authentication","authors":"Varun Kumar, M. K. Gupta, A. Chaturvedi, Anuj Bhardwaj, M. Singh","doi":"10.1109/ICDIP.2009.65","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.65","url":null,"abstract":"We propose and evaluate the usability and security of Click to Zoom-inside (CTZ); a new graphical password authentication mechanism. Users have to click six times on one point in some given specific regions (pass regions) shown with dotted lines in a theme image displayed on the screen. The selected region is then zoom to create a next image. Exactly, we are not going to zoom the region object of the theme image up to six times; rather we are replacing the image with another image of the same object in big size. The next image is based on the previous click-region. We secure our scheme from shoulder surfer attacking by using WIW scheme with our scheme. We also present the results of an initial user study which revealed positive results. Performance was very good in terms of speed, accuracy, and number of errors in recognizing the images. We can also demonstrate that CTZ provides greater security than Pass Points because the number of images increases the workload for attackers also it is more user friendly and attractive than other competitive schemes. It is just parallel to Cued Click Scheme. 
It meets today's requirement of extremely high security.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122537599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Combination Scheme for Fuzzy Partitions Based on Fuzzy Weighted Majority Voting Rule","authors":"Chunsheng Li, Yao-nan Wang, H. Dai","doi":"10.1109/ICDIP.2009.35","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.35","url":null,"abstract":"This paper devotes to the combination of fuzzy partitions with the same number of clusters by means of generalizing the weighted majority voting rule to fuzzy weighted majority voting rule. The difficulties of this generalization are to establish the correspondences among the classes and determine the weight coefficients of component fuzzy partitions. We propose a class-matching algorithm based on Hungarian method and generalize pattern recognition rate to fuzzy pattern recognition rate to overcome the difficulties. Employing the proposed class-matching algorithm and the fuzzy weighted majority voting rule, a combining scheme for fuzzy partitions is developed.Experimental results on real datasets show that the proposed ensemble of fuzzy partitions outperforms or is comparable to other two existed ensembles of fuzzy partitions in terms of most evaluation indexes for fuzzy partition.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122161965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Neural Image Analysis of Maturity Stage during Composting of Sewage Sludge","authors":"P. Boniecki, J. Dach, K. Nowakowski, A. Jakubek","doi":"10.1109/ICDIP.2009.85","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.85","url":null,"abstract":"The paper presents the experiments of compost images analysis carried out with two types of digital cameras working in daylight and ultraviolet light. The data collected with two cameras were analysed with the usage of neural network model (using part of application Statistica v. 8.0). The results of image analysis were combined also with the results of chemical and physical analysis of composted material in different stage of the composting process.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132460328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Counting of Leukocytes in Giemsa-Stained Images of Peripheral Blood Smear","authors":"Mohammad Hamghalam, A. Ayatollahi","doi":"10.1109/ICDIP.2009.9","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.9","url":null,"abstract":"There are many different classes of leukocyte in peripheral blood image. Leukocyte count is used to determine the presence of an infection in the human body. To be able to observe and recognize the different kinds of leukocyte, you must stain them. For this purpose, normally Giemsa stain is used. There are two difficult issues in image segmentation which common segmentation algorithms can not overcome them. Nucleus which is laid inside white cell is the darkest part of image which can be used to count cells. Since Giemsa staining is done by humans, intensity of images is slightly different from each others. Neutrophils are kinds of leukocytes which have segmented and distinctive nucleus. These reasons cause a considerable error in counting. In this paper, we have proposed to use histogram of images and intensity of red cells which are major objects in images to select appropriate point for thresholding. And then the distances among centers of the extracted nuclei have been calculated, according to the specified size of leukocytes, we merge the nuclei which those distances are less than the diameter of one leukocyte. 
Experimental results show that our approach is very efficient.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2009-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132285007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Research on the Video Segmentation Method with Integrated Multi-features Based on GMM","authors":"Herong Zheng, Zhi Liu, Xiaofeng Wang","doi":"10.1109/CIMCA.2008.112","DOIUrl":"https://doi.org/10.1109/CIMCA.2008.112","url":null,"abstract":"Video segmentation is a hot issue in the image research field. In the current video segmentation method, the pixel color feature in a frame is only considered. The pertinent problem between adjacent pixels is not taken into account. This paper proposes a video segmentation method based on GMM (Gaussian Mixture Model) modeling, meanwhile a method integrating the neighborhood characteristic of a pixel, such as pixel color and brightness characteristic is considered. The neighbor characteristic of a pixel can be a good solution for the bad segmentation result because of the tiny change in the background. The characteristic of brightness and chromaticity can solve the problem arising from the light and shadow change. In this method, the Gaussian mixture models for each pixel are built firstly. Then the relevant parameters are trained and identified. Combining the neighbor characteristic of pixel, brightness and chromaticity, the video can be segmented. Experiment results show that this method compared with other methods improves the video segmentation results.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-12-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128608415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On Line Wavelets Transform on a Xilinx FPGA Circuit to Medical Images Compression","authors":"H. Bessalah, F. Alim-Ferhat, H. Salhi, S. Seddiki, M. Issad, O. Kerdjidj","doi":"10.1109/ICDIP.2009.89","DOIUrl":"https://doi.org/10.1109/ICDIP.2009.89","url":null,"abstract":"Knowing that, the computing process of the S.Mallat Transform algorithm is characterized by a purely sequential structure, and from the fact, the on line mode arithmetic is more suitable for the computation of this kind of operations. We propose in this paper, a new wavelet Transform algorithm and a suitable architecture implemented on a Xilinx FPGA circuit. In this study, we will show how on line arithmetic is used to implement a pipelined architecture of the S.Mallat Transform and we will demonstrate through different implementations under different medical image and different computation mode that it might be used successfully for medical image compression.","PeriodicalId":206267,"journal":{"name":"2009 International Conference on Digital Image Processing","volume":"1874 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2008-09-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115095486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}