{"title":"Alignment of optic nerve head optical coherence tomography B-scans in right and left eyes","authors":"Marzieh Mokhtari, H. Rabbani, Alireza Mehri-Dehnavi","doi":"10.1109/ICIP.2016.7532780","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532780","url":null,"abstract":"Symmetry analysis of the right and left eyes can be a useful tool for early detection of eye diseases. In this study, we compare Optical Coherence Tomography (OCT) images captured from the optic nerve head (ONH) of the right and left eyes. To do this, it is necessary to align the OCT data and compare equivalent B-scans in the two eyes. Since the fovea-ONH axis is not available in OCT data, owing to OCT's small field of view, the projection of each eye's OCT data is first registered to its corresponding fundus image using vessels extracted by Hessian analysis of directional curvelet subbands. Then, by aligning the fundus images of the right and left eyes according to their automatically detected fovea-ONH axes, the OCT projections are also aligned. After alignment of the OCT projections, aligned B-scans are estimated and used to compare parameters such as the cup-to-disk ratio (CDR). Using aligned B-scans, two CDR signals are obtained, one per eye, in which each point corresponds to the CDR in a specific part of the ONH; this point-to-point comparison between the CDRs of the right and left eyes has the potential to yield a new imaging biomarker for eye disease detection.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117264454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Tumor segmentation by fusion of MRI images using copula based statistical methods","authors":"J. Lapuyade-Lahorgue, S. Ruan, Hua Li, P. Vera","doi":"10.1109/ICIP.2016.7533138","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533138","url":null,"abstract":"","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123975605","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Co-sparsity regularized deep hashing for image instance retrieval","authors":"Jie Lin, Olivier Morère, V. Chandrasekhar, A. Veillard, Hanlin Goh","doi":"10.1109/ICIP.2016.7532799","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532799","url":null,"abstract":"In this work, we tackle the problem of image instance retrieval with binary descriptors hashed from high-dimensional image representations. We present three main contributions. First, we propose Co-sparsity Regularized Hashing (CRH) to explicitly optimize the distribution of the generated binary hash codes, formulated by adding a co-sparsity regularization term to a Restricted Boltzmann Machine (RBM) based hashing model. CRH balances the variance of the hash codes within each image as well as the variance of each hash bit across images, maximizing the discriminability of the hash codes so that they can effectively distinguish images at very low rates (down to 64 bits). Second, we extend CRH to a deep network structure by stacking multiple co-sparsity-constrained RBMs, leading to further performance improvement. Finally, through a rigorous evaluation, we show that our model outperforms the state of the art at low rates (from 64 to 256 bits) across various datasets, regardless of the type of image representation used.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131034188","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Scale-constrained unsupervised evaluation method for multi-scale image segmentation","authors":"Yuhang Lu, Youchuan Wan, Gang Li","doi":"10.1109/ICIP.2016.7532821","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532821","url":null,"abstract":"Unsupervised evaluation of segmentation quality is a crucial step in image segmentation applications. Previous unsupervised evaluation methods usually lack adaptability to multi-scale segmentation. This paper proposes a scale-constrained evaluation method that evaluates segmentation quality according to a specified target scale. First, regional saliency and merging cost are employed to describe intra-region homogeneity and inter-region heterogeneity, respectively. Both are then standardized into equivalent spectral distances of a predefined region. Finally, by analyzing the relationship between image characteristics and segmentation quality, we establish the evaluation model. Experimental results show that the proposed method outperforms four commonly used unsupervised methods in multi-scale evaluation tasks.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133740631","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multiscale Nakagami parametric imaging for improved liver tumor localization","authors":"Omar Sultan Al-Kadi","doi":"10.1109/ICIP.2016.7532987","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532987","url":null,"abstract":"Effective ultrasound tissue characterization is usually hindered by complex tissue structures. The interlacing of speckle patterns complicates correct estimation of the backscatter distribution parameters. Nakagami parametric imaging based on localized shape-parameter mapping can model different backscattering conditions. However, the performance of the constructed Nakagami image depends on the sensitivity of the estimation method to the backscattered statistics and the scale of analysis. Using a fixed focal region of interest to estimate the Nakagami parametric image would increase estimation variance. In this work, localized Nakagami parameters are estimated adaptively by maximum likelihood estimation on a multiscale basis. The varying-size kernel integrates the goodness of fit of the backscattering distribution parameters at multiple scales for more stable parameter estimation. Results show improved quantitative visualization of changes in tissue specular reflections, suggesting a potential approach for improving tumor localization in low-contrast ultrasound images.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"218 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115522142","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multicolor removal based on color lines for SFS","authors":"Tianqi Wang, T. Aoki","doi":"10.1109/ICIP.2016.7533113","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533113","url":null,"abstract":"","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132045700","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sea-land segmentation via hierarchical region merging and edge directed graph cut","authors":"D. Cheng, Gaofeng Meng, Chunhong Pan","doi":"10.1109/ICIP.2016.7532563","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7532563","url":null,"abstract":"Separating an optical remote sensing image into sea and land areas is very challenging yet of great importance for coastline extraction and subsequent object detection. In this paper, we propose a hierarchical region merging approach to automatically extract the sea area and employ an edge-directed graph cut (GC) to accomplish the final segmentation. First, the image is segmented into superpixels and a graph-based merging method is employed to extract the maximum area of sea region (MASR). Then, non-connected sea regions are identified by measuring the distance between their superpixels and the MASR. When modelling the pairwise term in GC, we incorporate edge information between neighboring superpixels to reduce under-segmentation. Experimental results on a set of challenging images demonstrate the effectiveness of our method in comparison with state-of-the-art approaches.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116638545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Invariant representation for blur and down-sampling transformations","authors":"Huxiang Gu, Leibo Joel, Anselmi Fabio, Chunhong Pan, T. Poggio","doi":"10.1109/ICIP.2016.7533029","DOIUrl":"https://doi.org/10.1109/ICIP.2016.7533029","url":null,"abstract":"Invariant representations of images can significantly reduce the sample complexity of a classifier performing object identification or categorization, as shown in a recent analysis of invariant representations for object recognition. In the case of geometric transformations of images, the theory [1] shows how invariant signatures can be learned in a biologically plausible way from unsupervised observations of the transformations of a set of randomly chosen template images. Here we extend the theory to non-geometric transformations such as blur and down-sampling. The proposed algorithm achieves an invariant representation via two simple, biologically plausible steps: (1) compute normalized dot products of the input with the stored transformations of each template, and (2) for each template, compute statistics of the resulting set of values, such as the histogram or moments. The performance of our system on challenging blurred and low-resolution face matching tasks exceeds the previous state of the art by a large margin that grows with increasing image corruption.","PeriodicalId":147245,"journal":{"name":"International Conference on Information Photonics","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127739777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}