{"title":"Bact-3D: A level set segmentation approach for dense multi-layered 3D bacterial biofilms","authors":"Jie Wang, Rituparna Sarkar, A. Aziz, Andrea Vaccari, Andreas Gahlmann, S. Acton","doi":"10.1109/ICIP.2017.8296297","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296297","url":null,"abstract":"In microscopy, new super-resolution methods are emerging that produce three-dimensional images at resolutions ten times finer than those provided by traditional light microscopy. Such technology is enabling the exploration of structure and function in living tissues such as bacterial biofilms that have mysterious interconnections and organization. Unfortunately, the standard tools used in the image analysis community to perform segmentation and other higher-level analyses cannot be applied naïvely to these data. This paper presents Bact-3D, a 3D method for segmenting super-resolution images of multi-layered, living bacteria cultured in vitro. The method incorporates a novel initialization approach that exploits the geometry of the bacterial cells, as well as an iterative local level set evolution that is tailored to the biological application. In experiments where segmentation is used as a precursor to cell detection, Bact-3D matches or improves upon the Dice score and mean-squared error of two existing methods, while yielding a substantial improvement in cell detection accuracy. In addition to providing improvements in performance over the state of the art, this report also characterizes the tradeoff between imaging resolution and segmentation quality.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114741246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
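To give a sense of the level set machinery the Bact-3D record describes, here is a generic 2D Chan-Vese-style region update in numpy. This is a hedged stand-in only: the paper's geometry-aware initialization, local evolution, and 3D formulation are not reproduced here.

```python
import numpy as np

def chan_vese_step(phi, img, mu=0.2, dt=0.5):
    """One region-based level set iteration (Chan-Vese style), a generic 2D
    stand-in for the evolution described in Bact-3D, not the paper's method."""
    inside = phi > 0
    c1 = img[inside].mean() if inside.any() else 0.0      # mean intensity inside the front
    c2 = img[~inside].mean() if (~inside).any() else 0.0  # mean intensity outside
    gy, gx = np.gradient(phi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-8
    curvature = np.gradient(gy / norm, axis=0) + np.gradient(gx / norm, axis=1)
    # pull bright pixels inside and dark pixels outside; mu * curvature keeps the front smooth
    return phi + dt * (mu * curvature - (img - c1) ** 2 + (img - c2) ** 2)
```

Seeding `phi` from detected object geometry (the paper's initialization exploits bacterial cell shape) and iterating this step segments a bright region against a dark background.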
{"title":"Gaussian process dynamic modeling of bat flapping flight","authors":"Matt Bender, Xu Yang, Hui Chen, A. Kurdila, R. Müller","doi":"10.1109/ICIP.2017.8297142","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8297142","url":null,"abstract":"The flapping flight of bats can serve as an inspiration for flapping-wing air vehicles. Obtaining an understanding of bat flight requires detailed, occlusion-free kinematics data that can only be collected using large numbers of cameras. Here, we have explored the use of low-cost cameras with low frame rates that result in nonlinear, large-baseline motions in image space. To create a better model for predicting the motion of features under these circumstances, we have applied Gaussian Process Dynamic Modeling (GPDM) to manually digitized flight data in order to learn a lower dimensional manifold near which the motion evolves. The primary contribution of this work is the first nonlinear dimensionality reduction for the representation of bat flight.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123956304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Hyperspectral image inpainting based on collaborative total variation","authors":"P. Addesso, M. Mura, Laurent Condat, R. Restaino, G. Vivone, Daniele Picone, J. Chanussot","doi":"10.1109/ICIP.2017.8297090","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8297090","url":null,"abstract":"Inpainting in hyperspectral imagery is a challenging research area, and several methods have recently been developed to deal with this kind of data. In this paper, we address missing data restoration via a convex optimization technique with a regularization term based on Collaborative Total Variation (CTV). In particular, we evaluate the effectiveness of several instances of CTV in conjunction with different dimensionality reduction algorithms.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134016482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
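The CTV regularizer this record refers to pools image gradients jointly across spectral bands. A minimal sketch of one common instance, an l2,1 collaborative norm (l2 pooling over bands and gradient directions at each pixel, l1 sum over pixels) — the paper evaluates several such instances, and this particular pooling is an illustrative choice:

```python
import numpy as np

def ctv_l21(cube):
    """One illustrative instance of collaborative TV for an (H, W, bands)
    hyperspectral cube: l2 pooling over bands and gradient directions at
    each pixel, then l1 (sum) over pixels. Not the paper's exact choice."""
    gx = np.diff(cube, axis=0, append=cube[-1:, :, :])    # forward differences along rows
    gy = np.diff(cube, axis=1, append=cube[:, -1:, :])    # forward differences along columns
    per_pixel = np.sqrt((gx ** 2 + gy ** 2).sum(axis=2))  # l2 over bands + directions
    return per_pixel.sum()                                # l1 over pixels
```

In the inpainting setting this norm would be minimized over the missing pixels subject to agreement with the observed ones.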
{"title":"Weighted median-shift on graphs for geometric model fitting","authors":"Xiong Zhou, Hanzi Wang, Guobao Xiao, Xing Wang, Yan Yan, Liming Zhang","doi":"10.1109/ICIP.2017.8296342","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296342","url":null,"abstract":"In this paper, we deal with geometric model fitting problems on graphs, where each vertex represents a model hypothesis and each edge represents the similarity between two model hypotheses. Conventional median-shift methods are very efficient and can automatically estimate the number of clusters. However, they assign the same weighting score to all vertices of a graph, which cannot capture the discriminability of different vertices. Therefore, we propose a novel weighted median-shift on graphs method (WMSG) to fit and segment multiple-structure data. Specifically, we assign a weighting score to each vertex according to the distribution of the corresponding inliers. After that, we iteratively shift vertices towards the weighted median vertices to detect modes. The proposed method can adaptively estimate the number of model instances and deal with data contaminated with a large number of outliers. Experimental results on both synthetic data and real images show the advantages of the proposed method over several state-of-the-art model fitting methods.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115439475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
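A rough sketch of the shifting idea from this record. It is illustrative only: the vertex weights, the neighbourhood size `k`, and the convergence rule here are assumptions, not the authors' exact WMSG procedure.

```python
import numpy as np

def weighted_median_shift(D, w, k=3, max_iter=50):
    """Each point repeatedly shifts to the weighted median vertex of the k
    nearest vertices to its current position, i.e. the neighbour minimising
    the weighted sum of distances over the neighbourhood. Fixed points act
    as modes, and points sharing a mode form one cluster (illustrative
    sketch, not the paper's algorithm). D: vertex distance matrix, w: vertex weights."""
    n = D.shape[0]
    pos = np.arange(n)                        # vertex each point currently sits on
    for _ in range(max_iter):
        new_pos = pos.copy()
        for i in range(n):
            nbrs = np.argsort(D[pos[i]])[:k]  # k nearest vertices (includes self)
            costs = [(w[nbrs] * D[u, nbrs]).sum() for u in nbrs]
            new_pos[i] = nbrs[int(np.argmin(costs))]
        if np.array_equal(new_pos, pos):      # converged: every point sits at a mode
            break
        pos = new_pos
    return pos
```

The number of distinct final modes gives an estimate of the number of model instances, which is how median-shift methods avoid fixing the cluster count in advance.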
{"title":"Intelligent detail enhancement for differently exposed images","authors":"F. Kou, Weihai Chen, Xingming Wu, Zhengguo Li","doi":"10.1109/ICIP.2017.8296870","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296870","url":null,"abstract":"Multi-scale exposure fusion is a fast approach to fuse several differently exposed images captured of the same high dynamic range (HDR) scene into a high-quality low dynamic range (LDR) image. The fused image is expected to include all details of the input images; however, the details in the brightest and darkest regions are usually not preserved well. Adding details extracted from the input images to the fused image is an efficient way to overcome this problem. In this paper, a fast selective detail enhancement algorithm is proposed to extract the details in the brightest and darkest regions of the HDR scene and add them to the fused image. Experimental results show that the proposed algorithm can enhance the details of the fused image much faster than existing algorithms, with comparable or even better visual quality.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"108 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117261923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
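The extract-and-add idea in this record can be sketched as follows. The thresholds `lo`/`hi`, the single-scale base layer, and the choice of donor exposures are illustrative assumptions, not the authors' algorithm; `exposures` is assumed sorted from shortest to longest.

```python
import numpy as np

def box_blur(img, r=1):
    # simple mean filter by averaging shifted copies (edges wrap; fine for a sketch)
    acc = np.zeros_like(img, dtype=float)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / (2 * r + 1) ** 2

def enhance(fused, exposures, lo=0.15, hi=0.85):
    """Hedged sketch of the idea in the paper: pull detail (image minus its
    smoothed base layer) from the best-exposed input in the darkest and
    brightest regions and add it back to the fused result."""
    out = fused.astype(float).copy()
    dark = fused < lo       # darkest regions: detail lives in the long exposure
    bright = fused > hi     # brightest regions: detail lives in the short exposure
    long_exp, short_exp = exposures[-1], exposures[0]
    out[dark] += (long_exp - box_blur(long_exp))[dark]
    out[bright] += (short_exp - box_blur(short_exp))[bright]
    return np.clip(out, 0.0, 1.0)
```

Mid-tone pixels are left untouched, which is what makes the enhancement "selective" and cheap compared to re-running a full multi-scale fusion.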
{"title":"Tag tree creation of social image for personalized recommendation","authors":"Ying Yang, Jing Zhang, Jihong Liu, Jiafeng Li, L. Zhuo","doi":"10.1109/ICIP.2017.8296665","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296665","url":null,"abstract":"Tags are contributed by different users on social image sharing websites; they can indicate image semantics and imply a user's preference, and can therefore contribute to personalized recommendation of social images. However, existing social image tag models only consider single tags, so the relationships among tags are ignored. In this paper, we propose a novel method to create a tag tree of social images for personalized recommendation. First, tag ranking is performed to remove noisy tags. Then, the first-layer tags are selected from the re-ranked tag lists. To fully express each tag's significance, tag subtrees are created for different image categories and combined with the first-layer tags to form the tag tree. Finally, personalized recommendation of social images is achieved using the tag tree. Experimental results show that our tag tree can effectively express the relationships among tags and obtain satisfactory results in personalized recommendation of social images.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"346 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120958323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modeling structural dissimilarity based on shape embodiment for cell segmentation","authors":"Hyun-Gyu Lee, Adiba Orzikulova, Bo-Gyu Park, Sang-chul Lee","doi":"10.1109/ICIP.2017.8297002","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8297002","url":null,"abstract":"Accurate cell segmentation is one of the critical yet challenging problems in microscopy images, due to ambiguous boundaries as well as a wide variation in the shapes and sizes of cells. Although a number of existing methods have achieved decent results for cell segmentation, boundary vagueness between adjoining cells tends to produce perceptually inaccurate segmentation of stained nuclei. We propose a cell segmentation method based on the structural dissimilarity between embodied and imaged cells. Under the assumption that the shape of a region of adjoining cells follows a 2D Gaussian mixture model, the cell region is divided by an expectation-maximization method. The number of components of the 2D Gaussian mixture model is determined by the lowest structural dissimilarity against the embodied cells. The region of interest is extracted by a combination of global and local thresholding, which binarizes the local image around a seed at the center, where the seed is obtained from maximally stable extremal regions. Our approach achieved considerably higher evaluation scores than five existing methods in terms of both region-of-interest (ROI) detection accuracy and boundary discrimination.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"671 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116184357","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
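The EM-plus-model-selection step this record describes can be sketched as a minimal spherical-covariance 2D Gaussian mixture fit with a loop over candidate component counts. Everything here is illustrative: `em_gmm`, `pick_components`, the deterministic seeding, and the caller-supplied `dissimilarity` score (standing in for the paper's embodied-cell structural dissimilarity) are assumptions, not the authors' code.

```python
import numpy as np

def em_gmm(X, k, iters=100):
    """Minimal EM for a k-component spherical 2D Gaussian mixture; a generic
    stand-in for the paper's mixture fit. Deterministic init: means spread
    along the first coordinate. Returns hard labels."""
    idx = np.argsort(X[:, 0])[np.linspace(0, len(X) - 1, k).astype(int)]
    mu, var, pi = X[idx].copy(), np.full(k, X.var()), np.full(k, 1.0 / k)
    for _ in range(iters):
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)  # squared distance to each mean
        resp = pi * np.exp(-0.5 * d2 / var) / var       # spherical Gaussian responsibilities
        resp /= resp.sum(1, keepdims=True) + 1e-12
        nk = resp.sum(0) + 1e-9
        mu = (resp[:, :, None] * X[:, None, :]).sum(0) / nk[:, None]
        var = np.maximum((resp * d2).sum(0) / (2 * nk), 1e-6)
        pi = nk / len(X)
    return resp.argmax(1)

def pick_components(X, dissimilarity, k_max=4):
    # try k = 1..k_max and keep the count whose labelling scores lowest,
    # mirroring the paper's "lowest structural dissimilarity" criterion
    scores = [dissimilarity(X, em_gmm(X, k)) for k in range(1, k_max + 1)]
    return int(np.argmin(scores)) + 1
```

With a dissimilarity that penalises both poor fit and extra components, two well-separated point blobs are correctly split into two components.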
{"title":"Training sample selection for deep learning of distributed data","authors":"Zheng Jiang, Xiaoqing Zhu, Wai-tian Tan, Rob Liston","doi":"10.1109/ICIP.2017.8296670","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296670","url":null,"abstract":"The success of deep learning, in the form of multi-layer neural networks, depends critically on the volume and variety of training data. Its potential is greatly compromised when training data originate in a geographically distributed manner and are subject to bandwidth constraints. This paper presents a data sampling approach to deep learning, by carefully discriminating among locally available training samples based on their relative importance. Towards this end, we propose two metrics for prioritizing candidate training samples as functions of their test trial outcome: correctness and confidence. Bandwidth-constrained simulations show significant performance gains of our proposed training sample selection schemes over conventional uniform sampling: up to a 15× bandwidth reduction for the MNIST dataset and a 25% reduction in learning time for the CIFAR-10 dataset.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114845205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
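The prioritization idea — score each local sample by its test-trial correctness and confidence, and spend bandwidth on the highest-scoring ones — can be sketched as below. The combined scoring formula is an illustrative assumption; the paper proposes correctness and confidence as two separate metrics.

```python
import numpy as np

def select_samples(probs, labels, budget):
    """Sketch of importance-based selection in the spirit of the paper:
    run the current model on local samples and prioritise those it gets
    wrong, ranking confident mistakes above hesitant ones. `probs` is an
    (n, classes) softmax output; the exact score here is an illustrative
    choice, not the authors' formula."""
    preds = probs.argmax(axis=1)
    confidence = probs.max(axis=1)
    wrong = preds != labels
    # high score = wrong, and more so when confidently wrong; among correct
    # samples, low-confidence ones rank above confident ones
    score = np.where(wrong, 1.0 + confidence, 1.0 - confidence)
    return np.argsort(-score)[:budget]   # indices of the top-`budget` samples
```

Only the selected indices would be transmitted to the central trainer, which is where the bandwidth saving over uniform sampling comes from.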
{"title":"Correlation-based deblurring leveraging multispectral chromatic aberration in color and near-infrared joint acquisition","authors":"Majed El Helou, Zahra Sadeghipoor, S. Süsstrunk","doi":"10.1109/ICIP.2017.8296512","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296512","url":null,"abstract":"Joint acquisition of color and near-infrared (NIR) images is of growing interest due to various applications that make use of the additional spectral information. An obstacle to this acquisition is the wavelength-dependent blurring caused by the chromatic aberration of optical lenses. When one of the spectral channels, for example the green channel, is in focus on the sensor plane, the images of the other channels, especially NIR, are blurred. This paper presents a study of spectral-spatial correlations between color and NIR channels and proposes a method to correct for chromatic aberrations. The algorithm we introduce leverages axial chromatic aberration to deblur the NIR image when the color image is in focus. The proposed technique improves image sharpness by 48.8% on average compared to state-of-the-art results. Moreover, our method generates an NIR image that has a larger depth-of-field compared to an NIR image originally captured in focus.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124449143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Realistic image composite with best-buddy prior of natural image patches","authors":"Y. Wang, Fan Zhong, Xiangyu Sun, Xueying Qin","doi":"10.1109/ICIP.2017.8296687","DOIUrl":"https://doi.org/10.1109/ICIP.2017.8296687","url":null,"abstract":"Realistic image compositing requires the appearance of foreground and background layers to be consistent. This is difficult to achieve because the foreground and the background may be taken from very different environments. This paper proposes a novel composite adjustment method that can harmonize the appearance of different composite layers. We introduce the Best-Buddy Prior (BBP), a novel compact representation of the joint co-occurrence distribution of natural image patches. BBP can be learned from unlabelled images given only an unsupervised regional segmentation. The most probable adjustment of the foreground can be estimated efficiently in the BBP space as the shift vector to the local maximum of the density function. Both qualitative and quantitative evaluations show that our method outperforms previous composite adjustment methods.","PeriodicalId":229602,"journal":{"name":"2017 IEEE International Conference on Image Processing (ICIP)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127930026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}