{"title":"Identifying regions of interest for discriminating Alzheimer's disease from mild cognitive impairment","authors":"Helena Aidos, J. Duarte, A. Fred","doi":"10.1109/ICIP.2014.7025003","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025003","url":null,"abstract":"Alzheimer's disease (AD) is one of the most common types of dementia that affects elderly people, with no known cure. Early diagnosis of this disease is very important to improve patients' life quality and slow down the disease progression. Over the years, researchers have been proposing several techniques to analyze brain images, like FDG-PET, to automatically find changes in the brain activity. This paper compares regions of voxels identified by an expert with regions of voxels found automatically, in terms of corresponding classification accuracies based on three well-known classifiers. The automatic identification of regions is made by segmenting FDG-PET images, and extracting features that represent each of those regions. Experimental results show that the regions found automatically are very discriminative, outperforming results with expert's defined regions.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"30 1","pages":"21-25"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80858724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Single image dehazing based on fast wavelet transform with weighted image fusion","authors":"H. Zhang, Xuan Liu, Zhitong Huang, Yuefeng Ji","doi":"10.1109/ICIP.2014.7025921","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025921","url":null,"abstract":"Due to the presence of bad weather conditions, images captured in outdoor environments are usually degraded. In this paper, a novel single image dehazing method is proposed to enhance the visibility of such degraded images. Since the property of haze is widely spread, the estimated transmission should be smoothly changed over the scene. The fast wavelet transform (FWT) is introduced to estimate the smooth transmission in our work. To preserve more details and correct the color distortion, a solution based on weighted image fusion strategy is provided. Compared with the state-of-the-art single image dehazing methods, our method based on FWT with weighted image fusion (FWTWIF) produces similar or even better results with lower complexity. In order to verify the high visibility restoration and efficiency of our method, comparative experiments are conducted at the end of this paper.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"49 1","pages":"4542-4546"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80872699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Action recognition based on kinematic representation of video data","authors":"Xin Sun, Di Huang, Yunhong Wang, Jie Qin","doi":"10.1109/ICIP.2014.7025306","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025306","url":null,"abstract":"The local space-time feature is an effective way to represent video data and achieves state-of-the-art performance in action recognition. However, in majority of cases, it only captures the static or dynamic cues of the image sequence. In this paper, we propose a novel kinematic descriptor, namely Static and Dynamic fEature Velocity (SDEV), which models the changes of both static and dynamic information with time for action recognition. It is not only discriminative itself, but also complementary to the existing descriptors, thus leading to more comprehensive representation of actions by their combination. Evaluated on two public databases, i.e. UCF sports and Olympic Sports, the results clearly illustrate the competency of SDEV.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"86 1","pages":"1530-1534"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81227523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image segmentation by image foresting transform with geodesic band constraints","authors":"Caio de Moraes Braz, P. A. Miranda","doi":"10.1109/ICIP.2014.7025880","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025880","url":null,"abstract":"In this work, we propose a novel boundary constraint, which we denote as the Geodesic Band Constraint (GBC), and we show how it can be efficiently incorporated into a subclass of the Generalized Graph Cut framework (GGC). We include a proof of the optimality of the new algorithm in terms of a global minimum of an energy function subject to the new boundary constraints. The Geodesic Band Constraint helps regularizing the boundary, and consequently, improves the segmentation of objects with more regular shape, while keeping the low computational cost of the Image Foresting Transform (IFT). It can also be combined with the Geodesic Star Convexity prior, and with polarity constraints, at no additional cost. The method is demonstrated in CT thoracic studies of the liver, and MR images of the breast.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"20 1","pages":"4333-4337"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79522999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Screen-camera calibration using a thread","authors":"Songnan Li, K. Ngan, Lu Sheng","doi":"10.1109/ICIP.2014.7025698","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025698","url":null,"abstract":"In this paper, we propose a novel screen-camera calibration algorithm which aims to locate the position of the screen in the camera coordinate system. The difficulty comes from the fact that the screen is not directly visible to the camera. Rather than using an external camera or a portable mirror like in previous studies, we propose to use a more accessible and cheaper calibrating object, i.e., a thread. The thread is manipulated so that our algorithm can infer the perspective projections of the four screen corners on the image plane. The 3-dimentional (3D) position of each screen corner is then determined by minimizing the sum of squared projection errors. Experiments show that compared with the previous studies our method can generate similar calibration results without the additional hardware.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"110 1","pages":"3435-3439"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80799446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bandwidth efficient mobile cloud gaming with layered coding and scalable phong lighting","authors":"Seong-Ping Chuah, Ngai-Man Cheung","doi":"10.1109/ICIP.2014.7026212","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7026212","url":null,"abstract":"In mobile cloud gaming, one of the main challenges is to deliver high-quality game images over wireless networks under stringent delay requirement. To reduce the bit-rate of game images, we propose Layered Coding, which leverages the graphic rendering capability of modern mobile devices to reduce transmission bit-rate. Specifically, we render a low-quality local game image, or the base layer, on the power-constrained mobile client. Instead of sending the high quality game image, the cloud server sends enhancement layer information, which the client utilizes to improve the quality of the base layer. Central to the proposed layered coding is the design of base layer (BL) rendering. We discuss BL design and propose a computationally-scalable Phong lighting that can be used in BL rendering. We performed experiments to compare our layered coding with state-of-the-art, which uses H.264/AVC inter-frame coding to compress game images. With game sequences of different model complexity and motion, our results suggest that layered coding requires substantially lower data-rate. 
We made available game video test sequences to stimulate future research.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"28 1","pages":"6006-6010"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85381535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A meta-algorithm for classification by feature nomination","authors":"Rituparna Sarkar, K. Skadron, S. Acton","doi":"10.1109/ICIP.2014.7026050","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7026050","url":null,"abstract":"With increasing complexity of the dataset it becomes impractical to use a single feature to characterize all constituent images. In this paper we describe a method that will automatically select the appropriate image features that are relevant and efficacious for classification, without requiring modifications to the feature extracting methods or the classification algorithm. We first describe a method for designing class distinctive dictionaries using a dictionary learning technique, which yields class specific sparse codes and a linear classifier parameter. Then, we apply information theoretic measures to obtain the more informative feature relevant to a test image and use only that feature to obtain final classification results. With at least one of the features classifying the query accurately, our algorithm chooses the correct feature in 88.9% of the trials.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"67 1","pages":"5187-5191"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84055350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Edge enhancement of depth based rendered images","authors":"M. S. Farid, M. Lucenteforte, Marco Grangetto","doi":"10.1109/ICIP.2014.7026103","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7026103","url":null,"abstract":"Depth image based rendering is a well-known technology for the generation of virtual views in between a limited set of views acquired by a cameras array. Intermediate views are rendered by warping image pixels based on their depth. Nonetheless, depth maps are usually imperfect as they need to be estimated through stereo matching algorithms; moreover, for representation and transmission requirements depth values are obviously quantized. Such depth representation errors translate into a warping error when generating intermediate views thus impacting on the rendered image quality. We observe that depth errors turn to be very critical when they affect the object contours since in such a case they cause significant structural distortion in the warped objects. This paper presents an algorithm to improve the visual quality of the synthesized views by enforcing the shape of the edges in presence of erroneous depth estimates. We show that it is possible to significantly improve the visual quality of the interpolated view by enforcing prior knowledge on the admissible deformations of edges under projective transformation. 
Both visual and objective results show that the proposed approach is very effective.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"21 1","pages":"5452-5456"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84593157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature-based registration for correlative light and electron microscopy images","authors":"D. Nam, J. Mantell, Lorna Hodgson, D. Bull, P. Verkade, A. Achim","doi":"10.1109/ICIP.2014.7025724","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025724","url":null,"abstract":"In this paper we present a feature-based registration algorithm for largely misaligned bright-field light microscopy images and transmission electron microscopy images. We first detect cell centroids, using a gradient-based single-pass voting algorithm. Images are then aligned by finding the flip, translation and rotation parameters, which maximizes the overlap between pseudo-cell-centers. We demonstrate the effectiveness of our method, by comparing it to manually aligned images. Combining registered light and electron microscopy images together can reveal details about cellular structure with spatial and high-resolution information.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"77 1","pages":"3567-3571"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77504645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Joint sparsity-based robust visual tracking","authors":"B. Bozorgtabar, Roland Göcke","doi":"10.1109/ICIP.2014.7025998","DOIUrl":"https://doi.org/10.1109/ICIP.2014.7025998","url":null,"abstract":"In this paper, we propose a new object tracking in a particle filter framework utilising a joint sparsity-based model. Based on the observation that a target can be reconstructed from several templates that are updated dynamically, we jointly analyse the representation of the particles under a single regression framework and with the shared underlying structure. Two convex regularisations are combined and used in our model to enable sparsity as well as facilitate coupling information between particles. Unlike the previous methods that consider a model commonality between particles or regard them as independent tasks, we simultaneously take into account a structure inducing norm and an outlier detecting norm. Such a formulation is shown to be more flexible in terms of handling various types of challenges including occlusion and cluttered background. To derive the optimal solution efficiently, we propose to use a Preconditioned Conjugate Gradient method, which is computationally affordable for high-dimensional data. Furthermore, an online updating procedure scheme is included in the dictionary learning, which makes the proposed tracker less vulnerable to outliers. 
Experiments on challenging video sequences demonstrate the robustness of the proposed approach to handling occlusion, pose and illumination variation and outperform state-of-the-art trackers in tracking accuracy.","PeriodicalId":6856,"journal":{"name":"2014 IEEE International Conference on Image Processing (ICIP)","volume":"20 1","pages":"4927-4931"},"PeriodicalIF":0.0,"publicationDate":"2014-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78057318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}