{"title":"Robust segmentation of corneal fibers from noisy images","authors":"Jia Chen, J. Jester, M. Gopi","doi":"10.1145/3009977.3010051","DOIUrl":"https://doi.org/10.1145/3009977.3010051","url":null,"abstract":"Corneal collagen structure, which plays an important role in determining visual acuity, has drawn considerable research attention to its geometric properties. Advances in nonlinear optical (NLO) imaging provide a potential way to capture the fiber-level structure of the cornea; however, the artifacts introduced by the NLO imaging process make segmentation of such images a bottleneck for further analysis. In particular, existing methods fail to preserve branching points, which are important for mechanical analysis. In this paper, we propose a hybrid image segmentation method that integrates seeded region growing and iterative voting. Results show that our algorithm outperforms state-of-the-art techniques in segmenting fibers from background while preserving branching points. Finally, we show that, based on the segmentation result, branching points and fiber widths can be determined more accurately than with other methods, which is critical for mechanical analysis of corneal structure.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"128 1","pages":"58:1-58:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82784025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust pedestrian tracking using improved tracking-learning-detection algorithm","authors":"Ritika Verma, I. Sreedevi","doi":"10.1145/3009977.3009999","DOIUrl":"https://doi.org/10.1145/3009977.3009999","url":null,"abstract":"Manual analysis of pedestrians for surveillance of large crowds in real-time applications is not practical. Tracking-Learning-Detection (TLD), suggested by Kalal, Mikolajczyk and Matas [1], is one of the most prominent automatic object tracking systems. TLD can track a single object and can handle occlusion and appearance change, but it suffers from limitations. In this paper, tracking of multiple objects and estimation of their trajectories is proposed using an improved TLD. Feature tracking is used in place of grid-based tracking to address the limitation of tracking during out-of-plane rotation; this also optimizes the algorithm. The proposed algorithm further achieves auto-initialization by detecting pedestrians in the first frame, which makes it suitable for real-time pedestrian tracking.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"08 1","pages":"35:1-35:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85950954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A stratified registration framework for DSA artifact reduction using random walker","authors":"Manivannan Sundarapandian, K. Ramakrishnan","doi":"10.1145/3009977.3010066","DOIUrl":"https://doi.org/10.1145/3009977.3010066","url":null,"abstract":"In Digital Subtraction Angiography (DSA), non-rigid registration of the mask and contrast images to reduce motion artifacts is a challenging problem. In this paper, we propose a novel stratified registration framework for DSA artifact reduction. We use quad-trees to generate a non-uniform grid of control points and obtain the sub-pixel displacement offsets using Random Walker (RW). We also propose a sequencing logic for the control points and an incremental LU decomposition approach that enables reuse of the computations in the RW step. We have tested our approach on clinical data sets and found that our registration framework performs comparably to graph-cuts (at the same partition level) in regions wherein 95% artifact reduction was achieved. The optimization step achieves a speedup of 4.2 times with respect to graph-cuts.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"8 1","pages":"68:1-68:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85034710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iris recognition using partial sum of second order Taylor series expansion","authors":"B. H. Shekar, S. S. Bhat","doi":"10.1145/3009977.3010065","DOIUrl":"https://doi.org/10.1145/3009977.3010065","url":null,"abstract":"The iris is presently one of the most sought-after traits in biometric research, and extracting well-suited features from it has been a favourite topic of researchers. This paper proposes a novel iris feature extraction technique based on the partial sum of a second order Taylor series expansion (TSE). The finite sum of the TSE, computed on an arbitrarily small neighbourhood at multiple scales, can approximate the function extremely well and hence provides a powerful mechanism to extract the complex, localised features of the iris structure. To compute the higher order derivatives of the TSE, we propose kernel structures that extend the Sobel operators. Extensive experiments are conducted at multiple scales on the IITD, MMU v-2 and CASIA v-4 distance databases, and comparative analysis is performed with existing algorithms to substantiate the performance of the proposed method.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"11 1","pages":"81:1-81:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82900181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast frontier detection in indoor environment for monocular SLAM","authors":"Sarthak Upadhyay, K. Krishna, S. Kumar","doi":"10.1145/3009977.3010063","DOIUrl":"https://doi.org/10.1145/3009977.3010063","url":null,"abstract":"Frontier detection is a critical component in autonomous exploration, wherein the robot decides the next best location to move in order to continue its mapping process. The existing frontier detection methods require dense reconstruction, which is difficult to attain in a poorly textured indoor environment using a monocular camera. In this effort, we present an alternate method of detecting frontiers during the course of robot motion that circumvents the requirement of dense mapping. Based on the observation that frontiers typically occur around areas with sudden change in texture (zero-crossings), we propose a novel linear chain Conditional Random Field (CRF) formulation that is able to detect the presence or absence of frontier regions around such areas. We use cues like spread of 3D points and scene change around these areas as an observation to the CRF. We demonstrate that this method gives us more relevant frontiers compared to other monocular camera based methods in the literature. Finally, we present results in an indoor environment, wherein frontiers are reliably detected around walls leading to new corridors, doors leading to new rooms or corridors, and tables and other objects that open up to a new space in rooms.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"75 1","pages":"39:1-39:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83794189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An image analysis approach for transcription of music played on keyboard-like instruments","authors":"Souvik Deb, Ajit V. Rajwade","doi":"10.1145/3009977.3010007","DOIUrl":"https://doi.org/10.1145/3009977.3010007","url":null,"abstract":"Music transcription refers to the process of analyzing a piece of music to generate a sequence of constituent notes and their duration. Transcription of music from audio signals is fraught with problems due to auditory interference such as ambient noise, multiple instruments playing simultaneously, accompanying vocals or polyphonic sounds. For several instruments, there exists added information for music transcription which can be derived from a video sequence of the instrument as it is being played. This paper proposes a method to utilize this visual information for the case of keyboard-like instruments to generate a transcript automatically, by analyzing the video frames. We present encouraging results under varying lighting conditions on different song sequences played out on a keyboard.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"41 1","pages":"5:1-5:6"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80556385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mosaicing deep underwater imagery","authors":"Kuldeep Purohit, Subeesh Vasu, A. Rajagopalan, V. Jyothi, Ramesh Raju","doi":"10.1145/3009977.3010029","DOIUrl":"https://doi.org/10.1145/3009977.3010029","url":null,"abstract":"Numerous sources of distortion render mosaicing of underwater (UW) images an immensely challenging effort. Methods that can process conventional photographs (terrestrial/aerial) fail to deliver the desired results on UW images. Taking the sources of underwater degradation into account is central to ensuring quality performance. The work described in this paper specifically deals with the problem of mosaicing deep UW images captured by Remotely Operated Vehicles (ROVs). These images are mainly degraded by haze, color changes, and non-uniform illumination. We propose a framework that restores these images in accordance with a suitably derived degradation model. Furthermore, our scheme harnesses the scene geometry information present in each image to aid in constructing a mosaic that is free from artifacts such as local blurring, ghosting, double contouring and visible seams. Several experiments on real underwater image sequences have been carried out to demonstrate the performance of our mosaicing pipeline, along with comparisons.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"33 1","pages":"74:1-74:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83641796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images","authors":"G. Gopakumar, M. Swetha, G. S. Siva, G. R. S. Subrahmanyam","doi":"10.1145/3009977.3010024","DOIUrl":"https://doi.org/10.1145/3009977.3010024","url":null,"abstract":"Malaria is a deadly infectious disease affecting red blood cells in humans, caused by protozoa of the genus Plasmodium. In 2015, there was an estimated death toll of 438,000 patients out of the total 214 million malaria cases reported worldwide. Thus, building an accurate automatic system for detecting malarial cases is beneficial and has huge medical value. This paper addresses the detection of Plasmodium Falciparum infected RBCs from Leishman's stained microscope slide images. Unlike the traditional way of examining a single focused image to detect the parasite, we make use of a focus stack of images collected using a bright field microscope. Rather than extracting specific hand-engineered features in the conventional way, we opt for a Convolutional Neural Network that can operate directly on images, bypassing the need for feature engineering. We work with image patches at the suspected parasite location, thereby avoiding the need for cell segmentation. We report and compare the detection rates obtained when only a single focused image is used and when the method operates on the focus stack of images. Altogether, the proposed novel approach results in highly accurate malaria detection.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"94 1","pages":"16:1-16:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74408271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"User guided generation of corroded objects","authors":"N. Jain, P. Kalra, R. Ranjan, Subodh Kumar","doi":"10.1145/3009977.3010031","DOIUrl":"https://doi.org/10.1145/3009977.3010031","url":null,"abstract":"Rendering of corrosion often requires painstaking modeling and texturing. On the other hand, there exist techniques for stochastic modeling of corrosion, which can automatically perform simulation and rendering under the control of some user-specified parameters. Unfortunately, these parameters are non-intuitive and have a global impact, and it is hard to determine their values to obtain a desired look. For example, in real life, corrosion is influenced both by internal object-specific geometric factors, like sharp corners and curvatures, and by external interventions like scratches and blemishes. Further, a graphics designer may want to selectively corrode areas to obtain a particular scene. We present a technique for user guided spread of corrosion. Our framework encapsulates both structural and aesthetic factors. Given the material properties and the surrounding environmental conditions of an object, we employ a physio-chemically based stochastic model to deduce the decay of different points on that object. Our system equips the user with a platform where the imperfections can be provided by either manual or systematic interference on a rendering of the three-dimensional object. We demonstrate several user guided characteristic simulations encompassing varied influences, including material, object characteristics and environment conditions. Our results are visually validated to understand the impact of imperfections over elapsed time.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"5 1","pages":"89:1-89:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82299752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analyzing object categories via novel category ranking measures defined on visual feature embeddings","authors":"Ravi Kiran Sarvadevabhatla, Raviteja Meesala, Manjunath Hegde, R. Venkatesh Babu","doi":"10.1145/3009977.3010037","DOIUrl":"https://doi.org/10.1145/3009977.3010037","url":null,"abstract":"Visualizing 2-D/3-D embeddings of image features can help gain an intuitive understanding of the image category landscape. However, popular methods of visualizing such embeddings (e.g. color-coding by category) are impractical when the number of categories is large. To address this and other shortcomings, we propose novel quantitative measures defined on image feature embeddings. Each measure produces a ranked ordering of the categories and provides an intuitive vantage point from which to view the entire set of categories. As an experimental testbed, we use deep features obtained from category-epitomes, a recently introduced minimalist visual representation, across 160 object categories. We embed the features in a visualization-friendly yet similarity-preserving 2-D manifold and analyze the inter/intra-category distributions of these embeddings using the proposed measures. Our analysis demonstrates that the category ordering methods enable new insights for the domain of large-category object representations. Moreover, our ordering measure approach is general in nature and can be applied to any feature-based representation of categories.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"53 1","pages":"79:1-79:6"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83263374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}