{"title":"Interactive graph cut segmentation of touching neuronal structures from electron micrographs","authors":"V. Jagadeesh, B. S. Manjunath","doi":"10.1109/ICIP.2010.5652042","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5652042","url":null,"abstract":"A novel interactive segmentation framework comprising a two-stage s-t mincut is proposed. The framework is designed to address the need to segment touching neuronal structures in Electron Micrograph (EM) images. The first stage undersegments the image, grouping touching structures into a single class. The second stage accepts user interaction to separate touching structures. The technique introduces user feedback through a Markov Random Field formulation. Furthermore, a method for constructing interaction potentials using an edge response function is proposed. Encouraging results and a comparison to state-of-the-art methods are presented.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124527690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Statistical modeling of the lung nodules in low dose computed tomography scans of the chest","authors":"A. Farag, J. Graham, S. Elshazly, A. Farag","doi":"10.1109/ICIP.2010.5651832","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5651832","url":null,"abstract":"This work presents a novel approach to automatic detection of lung nodules and compares it with parametric nodule models in terms of sensitivity and specificity. A statistical method is used to generate data-driven models of the nodules appearing in low dose CT (LDCT) scans of the human chest. Four types of common lung nodules are analyzed using the Procrustes-based AAM method to create descriptive nodule models. On clinical datasets, the new nodule models significantly outperform parametric nodule models in both sensitivity and specificity. The new nodule modeling approach is also applicable to automatic classification of nodules into pathologies given a descriptive database. This approach is a major step forward for early diagnosis of lung cancer.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124131822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Frame rate up conversion via image fusion based on variational approach","authors":"Won-Hee Lee, Yun-jun Choi, Kyuha Choi, J. Ra","doi":"10.1109/ICIP.2010.5654075","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5654075","url":null,"abstract":"In this paper, we propose a novel framework for motion-compensated frame rate up conversion. The proposed algorithm consists of two steps: generating interpolated images and fusing them. In the first step, we generate four different interpolated frames between the two given frames. These frames are obtained through motion-compensated interpolation, using different sets of optical flow fields estimated from four consecutive frames. In the second step, we fuse the four interpolated images into one using a variational structure, so that we can effectively remove outliers caused by false optical flows in occlusion regions. Experimental results demonstrate that the proposed algorithm improves objective and subjective visual quality compared to existing algorithms.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126324867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Elastic modulus imaging using optical flow and image registration","authors":"R. Martí, J. Noble","doi":"10.1109/ICIP.2010.5653277","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5653277","url":null,"abstract":"Elastography, the imaging technique for estimating elastic tissue properties, and more specifically elastic modulus imaging, is becoming an important tool in computer-aided diagnosis systems, especially for ultrasound and MRI images. This technique still presents unsolved challenges in the analysis of deformations in image sequences. The aim of this paper is twofold: to evaluate the applicability of the deformation fields obtained by state-of-the-art optical flow and image registration algorithms for elastic modulus imaging, and to quantitatively evaluate two different methods for estimating the elastic modulus distribution. Results show that optical-flow methods provide a slightly better reconstruction, and that the reconstruction is more accurate using the method proposed by Sumi et al.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126535667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"HFAG: Hierarchical Frame Affinity Group for video retrieval on very large video dataset","authors":"Yin-Jun Miao, Chao Wang, Peng Cui, Lifeng Sun, Pin Tao, Shiqiang Yang","doi":"10.1109/ICIP.2010.5654073","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5654073","url":null,"abstract":"Content-based video retrieval systems are expected to quickly and accurately find the nearest neighbors of user-supplied examples in very large video datasets. This poses a great challenge, since exhaustive and redundant similarity computations are required. Cluster-based index approaches can address this problem, but the similarity computation and clustering methods for videos are very time-consuming, which prevents them from indexing very large video datasets. In this paper, we propose the Hierarchical Frame Affinity Group (HFAG), a hierarchy of frame clusters built using the affinity propagation (AP) method, to represent video clusters. Our proposed video similarity metric and AP method guarantee high performance in forming HFAGs. We then build a cluster-based index structure to support retrieval of the nearest neighbors of video sequences. Experiments on real large video datasets demonstrate the effectiveness and efficiency of our approach.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128107401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Contour detection based on SUSAN principle and surround suppression","authors":"Zhiguo Qu, Ping Wang, Yinghui Gao, Peng Wang, Zhenkang Shen","doi":"10.1109/ICIP.2010.5651292","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5651292","url":null,"abstract":"A contour edge detector combining the SUSAN principle and surround suppression is proposed in this paper. The operator follows the flow of the Canny edge detector. First, edge gradient information and a modified SUSAN principle are used to approximately classify contour edge points and texture edge points. Second, surround suppression is applied to suppress the texture edges. Finally, the contour map is constructed through two hysteresis thresholding procedures. A performance comparison with three other detectors is made, and experimental results show that our contour detector performs better.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128134929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A fast iterative kernel PCA feature extraction for hyperspectral images","authors":"Wenzi Liao, A. Pižurica, W. Philips, Y. Pi","doi":"10.1109/ICIP.2010.5651670","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5651670","url":null,"abstract":"A fast iterative Kernel Principal Component Analysis (KPCA) is proposed to extract features from hyperspectral images. The proposed method is a kernel version of Candid Covariance-Free Incremental Principal Component Analysis, which solves for the eigenvectors iteratively. By avoiding eigendecomposition of the Gram matrix, our method greatly reduces both space and time complexity. Experimental results are validated in comparison with standard KPCA and its linear counterpart.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125654996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Topology based affine invariant descriptor for MSERs","authors":"Chenbo Shi, Guijin Wang, Xinggang Lin, Yongming Wang, Chao Liao, Quan Miao","doi":"10.1109/ICIP.2010.5653145","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5653145","url":null,"abstract":"This paper introduces a topology-based affine invariant descriptor for maximally stable extremal regions (MSERs). The popular SIFT descriptor computes texture information on a grey-scale patch. Our descriptor instead uses only the topology and geometric information among MSERs, so that features can be matched rapidly regardless of the texture in the image patch. Based on ellipse fitting of the detected MSERs, geometric affine invariants between ellipse pairs are extracted as descriptors. Finally, a topology-based voting selector is designed to achieve the best correspondences. Experiments show that our descriptor is not only computationally faster than the SIFT descriptor, but also performs better under wide viewing angles and nonlinear illumination changes. In addition, our descriptor gives good results on multi-sensor image registration.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125655241","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Example-based image compression","authors":"Jingtao Cui, S. Mathur, Michele Covell, Vivek Kwatra, Mei Han","doi":"10.1109/ICIP.2010.5652402","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5652402","url":null,"abstract":"The current standard image-compression approaches rely on fairly simple predictions, using either block- or wavelet-based methods. While many more sophisticated texture-modeling approaches have been proposed, most do not provide a significant improvement in compression rate over the current standards at a workable encoding complexity level. We re-examine this area, using example-based texture prediction. We find that we can provide consistent and significant improvements over JPEG, reducing the bit rate by more than 20% for many PSNR levels. These improvements require consideration of the differences between residual energy and prediction/residual compressibility when selecting a texture prediction, as well as careful control of the computational complexity in encoding.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122271688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Implementation and optimization of image processing algorithms on handheld GPU","authors":"Nitin Singhal, I. Park, Sungdae Cho","doi":"10.1109/ICIP.2010.5651740","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5651740","url":null,"abstract":"The advent of GPUs with programmable shaders on handheld devices has motivated embedded application developers to utilize GPU to offload computationally intensive tasks and relieve the burden from embedded CPU. In this work, we propose an image processing toolkit on handheld GPU with programmable shaders using OpenGL ES 2.0 API. By using the image processing toolkit, we show that a range of image processing algorithms map readily to handheld GPU. We employ real-time video scaling, cartoon-style non-photorealistic rendering, and Harris corner detector as our example applications. In addition, we propose techniques to achieve increased performance with optimized shader design and efficient sharing of GPU workload between vertex and fragment shaders. Performance is evaluated in terms of frames per second at varying video stream resolution.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121440059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}