{"title":"A face super-resolution approach using shape semantic mode regularization","authors":"Chengdong Lan, R. Hu, Zhen Han, Zhongyuan Wang","doi":"10.1109/ICIP.2010.5649896","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5649896","url":null,"abstract":"In actual imaging environment, a variety of factors have an impact on the quality of images, which leads to pixel distortion and aliasing. The traditional face super-resolution algorithm only uses the difference of image pixel values as similarity criterion, which degrades similarity and identification of reconstructed facial images. Image semantic information with human understanding, especially structural information, is robust to the degraded pixel values. In this paper, we propose a face super-resolution approach using shape semantic model. This method describes the facial shape as a series of fiducial points on facial image. And shape semantic information of input image is obtained manually. Then a shape semantic regularization is added to the original objective function. The steepest descent method is used to obtain the unified coefficient. Experimental results demonstrate that the proposed method outperforms the traditional schemes significantly both in subjective and objective quality.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125087123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extracting corner-cue feature to improve minutiae-matching accuracy","authors":"Jiajia Lei, Xinge You, Long Zhou, W. Zeng","doi":"10.1109/ICIP.2010.5654206","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5654206","url":null,"abstract":"This paper proposes a new feature of fingerprint, called corner-cue. It is based on the curvature of fingerprint ridges. To extract the corner-cue, we first compute the curvature of fingerprint ridges and find the local maximum curvature points. Without regard to the high curvature points near minutiae, corner-cues are obtained. Corner-cues are further utilized in the matching stage to enhance the system's performance. Since high curvature points are important features of a fingerprint, the proposed method can obtain better results than conventional solely minutiae-based methods. Experimental results illustrate its effectiveness.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123561727","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust range image registration using 3D lines","authors":"Jian Yao, M. Ruggeri, P. Taddei, V. Sequeira","doi":"10.1109/ICIP.2010.5652449","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5652449","url":null,"abstract":"We present an efficient method for accurate automatic registration of two geometrically complex 3D range scans by using 3D lines. We first detect edges from the associated 2D reflectance images and collect 3D edge contours by only taking into account valid foreground points. Then we use an efficient split-and-merge line fitting algorithm to detect 3D lines. We build a fast search codebook to efficiently match the two sets of 3D lines. This is done by computing the orientation angle and distance of pairs of 3D lines in each set, both of which are invariant under rigid transformations. Finally we recover the rigid transformation between two scans using an efficient RANSAC algorithm with robust transformation estimation that exploits two sets of corresponding 3D lines. We conclude presenting experimental results that demonstrate efficiency and accuracy of our proposed method.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125292108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing support vector machine based classification and retrieval of semantic video events with genetic algorithms","authors":"Bashar Tahayna, M. Belkhatir, S. Alhashmi, T. O'Daniel","doi":"10.1109/ICIP.2010.5653724","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5653724","url":null,"abstract":"Building accurate models for video event classification is an important research issue since they are essential components for effective video indexing and retrieval. Recently kernel-based methods, particularly support vector machines, have become popular in multimedia classification tasks. However, in order to use them effectively, several factors that hinder accurate classification results, such as feature subset selection and selection of the SVM kernel parameters, must be addressed through the use of heuristic-based techniques. We present a new approach to enhance the performance of SVM for video events classification based on a search method. The latter relies on the simultaneous optimization of the feature and instance subset and SVM kernel parameters, with genetic algorithms. Classification results on sport videos show the significant improvement over conventional SVM.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125432020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Pattern analysis of stem cell growth dynamics in the shoot apex of arabidopsis","authors":"Oben M. Tataw, Min Liu, Amit Roy-Chowdhurry, R. K. Yadav, G. Reddy","doi":"10.1109/ICIP.2010.5652018","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5652018","url":null,"abstract":"The Shoot Apical Meristem (SAM) is made of stem cells that are responsible for all above ground plant structures. Differentiating cells in the development of SAM form primordia. Primordia develop to become various plant organs. Understanding the growth dynamics of primordia is critical to understanding the developmental dynamics of the entire SAM. We present a method for performing quantitative analysis of primordia development in model plant Arabidopsis thaliana. A contour based approach is used to detect and isolate individual primordia from 3D live imaging data. Regions of growth are detected by analyzing eigenvalues of curvature covariance matrices. After primordia detection and isolation, a Dynamic Time Warping (DTW) Algorithm is applied to compute the rate of growth. Results show the successful use of our method to quantitatively analyze primordial growth.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125589378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Side-information-adaptive distributed source coding","authors":"D. Varodayan, B. Girod","doi":"10.1109/ICIP.2010.5653404","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5653404","url":null,"abstract":"Consider distributed source coding in which each block of the source at the encoder is associated with multiple candidates for side information at the decoder, just one of which is statistically dependent on the source block. Our encoder codes the source as syndrome bits and also sends a portion of it uncoded as doping bits. The decoder adaptively discovers the best side information candidates for each block of the source. The main contribution is a method based on density evolution to analyze and design the coding performance. Experimental results show that the density evolution technique is accurate in modeling the codec and optimizing its doping rate.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126949311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-target tracking using long-term stochastic associations","authors":"Ting-Yueh Jeng, Bi Song, E. Staudt, Min Liu, A. Roy-Chowdhury, A. SenGupta","doi":"10.1109/ICIP.2010.5651303","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5651303","url":null,"abstract":"Maintaining the stability of tracks on multiple targets in video over extended time periods remains a challenging problem. A few methods which have recently shown encouraging results in this direction rely on learning context models or the availability of training data. However, this may not be feasible in many application scenarios. Moreover, tracking methods should be able to work across multiple resolutions of the video. In this paper, we consider the problem of long-term tracking in video in application domains where context information is not available a priori, nor can it be learned online. We build our solution on the hypothesis that most existing trackers can obtain reasonable short-term tracks (tracklets). By analyzing the statistical properties of these tracklets, we develop associations between them so as to come up with longer tracks. On multiple real-life video sequences spanning low and high resolution data, we show the ability to accurately track over extended time periods.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115091025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adaptive shape prior in graph cut segmentation","authors":"Maddy Hui Wang, Hong Zhang","doi":"10.1109/ICIP.2010.5653335","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5653335","url":null,"abstract":"In this paper, we propose a novel method to adaptively apply shape prior in graph cut segmentation. By incorporating shape priors in an adaptive way, we introduce a robust way to harness shape prior in graph cut segmentation. Since traditional graph cut approaches with shape prior may fail in cases where parameters for shape prior term are not set appropriately, incorporation of shape priors adaptively within this framework mitigates these problems. To address this issue, we propose to adaptively apply shape prior based on a shape probability map, defined to reflect the need of shape prior at each location of an image. We show that the proposed method can be easily applied to existing algorithms of graph cut segmentation with shape prior, such as level set based shape prior method, and star shape prior graph cut. We validate our method in various types of images corrupted by significant noise and intensity inhomogeneities. Convincing results are obtained.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115179318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collision-detection based rate-adaptation for video multicasting over IEEE 802.11 wireless networks","authors":"Chao Zhou, Xinggong Zhang, Lichuan Lu, Zongming Guo","doi":"10.1109/ICIP.2010.5652665","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5652665","url":null,"abstract":"Wireless video multicasting/broadcasting is an efficient method for simultaneous transmission of data to a group of users. But the multicasting rates are fixed in current IEEE 802.11 PHYs standard. In this paper, we propose a novel collision-detection based rate-adaptation scheme (CDRA), which fully exploits the potential of rate adaptation capability of wireless physical layer, to improve service qualities of video multicasting. The received signal strength indication (RSSI) and packet error ratio (PER) are comprehensively used to detect collision. The PER-guided rate adjustment algorithm is performed when no collision happens. Otherwise the collision-avoid mechanism works. By detecting the collision, our scheme could adaptively select the maximum data rates for video multicasting. We construct a practical multicasting test-bed in IEEE 802.11b network and carry out extensive experiments. The results show that CDRA achieves throughput gain up to 166% and PSNR gain to 139% compared with existing methods.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115594424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Eye tracking based perceptual image inpainting quality analysis","authors":"M. Venkatesh, S. Cheung","doi":"10.1109/ICIP.2010.5653640","DOIUrl":"https://doi.org/10.1109/ICIP.2010.5653640","url":null,"abstract":"The objective of image inpainting is to perform a seamless completion of missing areas in images. Evaluating the perceptual quality of an inpainting algorithm must rely on features of the Human Visual System. Using eye-tracking experiments, we show that there is a strong correlation between inpainting quality and visual attention. By comparing gaze densities within and outside the hole regions of inpainted images, we show that discernible artifacts due to inpainting attract an unusual amount of visual attention. The gaze density within the hole, normalized with the gaze density of the same region from the unmodified image, provides a useful measure in comparing different inpainting processes and corroborates well with subjective rankings.","PeriodicalId":228308,"journal":{"name":"2010 IEEE International Conference on Image Processing","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116116162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}