{"title":"Robust segmentation of corneal fibers from noisy images","authors":"Jia Chen, J. Jester, M. Gopi","doi":"10.1145/3009977.3010051","DOIUrl":"https://doi.org/10.1145/3009977.3010051","url":null,"abstract":"Corneal collagen structure, which plays an important role in determining visual acuity, has drawn a lot of research attention to exploring its geometric properties. Advancement of nonlinear optical (NLO) imaging provides a potential way for capturing fiber-level structure of cornea, however, the artifacts introduced by the NLO imaging process make image segmentation on such images a bottleneck for further analysis. Especially, the existing methods fail to preserve the branching points which are important for mechanical analysis. In this paper, we propose a hybrid image segmentation method, which integrates seeded region growing and iterative voting. Results show that our algorithm outperforms state-of-the-art techniques in segmenting fibers from background while preserving branching points. Finally, we show that, based on the segmentation result, branching points and the width of fibers can be determined more accurately than the other methods, which is critical for mechanical analysis on corneal structure.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"128 1","pages":"58:1-58:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82784025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Robust pedestrian tracking using improved tracking-learning-detection algorithm","authors":"Ritika Verma, I. Sreedevi","doi":"10.1145/3009977.3009999","DOIUrl":"https://doi.org/10.1145/3009977.3009999","url":null,"abstract":"Manual analysis of pedestrians for surveillance of large crowds in real time applications is not practical. Tracking-Learning-Detection suggested by Kalal, Mikolajczyk and Matas [1] is one of the most prominent automatic object tracking system. TLD can track single object and can handle occlusion and appearance change but it suffers from limitations. In this paper, tracking of multiple objects and estimation of their trajectory is suggested using improved TLD. Feature tracking is suggested in place of grid based tracking to solve the limitation of tracking during out of plane rotation. This also leads to optimization of algorithm. Proposed algorithm also achieves auto-initialization with detection of pedestrians in the first frame which makes it suitable for real time pedestrian tracking.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"08 1","pages":"35:1-35:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85950954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A stratified registration framework for DSA artifact reduction using random walker","authors":"Manivannan Sundarapandian, K. Ramakrishnan","doi":"10.1145/3009977.3010066","DOIUrl":"https://doi.org/10.1145/3009977.3010066","url":null,"abstract":"In Digital Subtraction Angiography (DSA), non-rigid registration of the mask and contrast images to reduce the motion artifacts is a challenging problem. In this paper, we have proposed a novel stratified registration framework for DSA artifact reduction. We use quad-trees to generate the non-uniform grid of control points and obtain the sub-pixel displacement offsets using Random Walker (RW). We have also proposed a sequencing logic for the control points and an incremental LU decomposition approach that enables reuse of the computations in the RW step. We have tested our approach using clinical data sets, and found that our registration framework has performed comparable to the graph-cuts (at the same partition level), in regions wherein 95% artifact reduction was achieved. The optimization step achieves a speed improvement of 4.2 times with respect to graph-cuts.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"8 1","pages":"68:1-68:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85034710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Iris recognition using partial sum of second order Taylor series expansion","authors":"B. H. Shekar, S. S. Bhat","doi":"10.1145/3009977.3010065","DOIUrl":"https://doi.org/10.1145/3009977.3010065","url":null,"abstract":"Iris is presently one among the most sought after traits in biometric research. Extracting well-suited features from iris has been a favourite topic of the researchers. This paper proposes a novel iris feature extraction technique based on partial sum of second order Taylor series expansion (TSE). The finite sum of TSE computed on an arbitrary small neighbourhood on multiple scales can approximate the function extremely well and hence provides a powerful mechanism to extract the complex natured localised features of iris structure. To compute the higher order derivatives of TSE, we propose kernel structures by extending the Sobel operators. Extensive experiments are conducted with multiple scales on IITD, MMU v-2 and CASIA v-4 distance databases and comparative analysis is performed with the existing algorithms to substantiate the performance of the proposed method.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"11 1","pages":"81:1-81:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82900181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast frontier detection in indoor environment for monocular SLAM","authors":"Sarthak Upadhyay, K. Krishna, S. Kumar","doi":"10.1145/3009977.3010063","DOIUrl":"https://doi.org/10.1145/3009977.3010063","url":null,"abstract":"Frontier detection is a critical component in autonomous exploration, wherein the robot decides the next best location to move in order to continue its mapping process. The existing frontier detection methods require dense reconstruction which is difficult to attain in a poorly textured indoor environment using a monocular camera. In this effort, we present an alternate method of detecting frontiers during the course of robot motion that circumvents the requirement of dense mapping. Based on the observation that frontiers typically occur around areas with sudden change in texture (zero-crossings), we propose a novel linear chain Conditional Random Field(CRF) formulation that is able to detect the presence or absence of frontier regions around such areas. We use cues like spread of 3D points and scene change around these areas as an observation to CRF. We demonstrate that this method gives us more relevant frontiers compared to other monocular camera based methods in the literature. Finally, we present results in an indoor environment, wherein frontiers are reliably detected around walls leading to new corridors, doors leading to new rooms or corridors and tables and other objects that open up to a new space in rooms.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"75 1","pages":"39:1-39:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83794189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An image analysis approach for transcription of music played on keyboard-like instruments","authors":"Souvik Deb, Ajit V. Rajwade","doi":"10.1145/3009977.3010007","DOIUrl":"https://doi.org/10.1145/3009977.3010007","url":null,"abstract":"Music transcription refers to the process of analyzing a piece of music to generate a sequence of constituent notes and their duration. Transcription of music from audio signals is fraught with problems due to auditory interference such as ambient noise, multiple instruments playing simultaneously, accompanying vocals or polyphonic sounds. For several instruments, there exists added information for music transcription which can be derived from a video sequence of the instrument as it is being played. This paper proposes a method to utilize this visual information for the case of keyboard-like instruments to generate a transcript automatically, by analyzing the video frames. We present encouraging results under varying lighting conditions on different song sequences played out on a keyboard.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"41 1","pages":"5:1-5:6"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80556385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mosaicing deep underwater imagery","authors":"Kuldeep Purohit, Subeesh Vasu, A. Rajagopalan, V. Jyothi, Ramesh Raju","doi":"10.1145/3009977.3010029","DOIUrl":"https://doi.org/10.1145/3009977.3010029","url":null,"abstract":"Numerous sources of distortions render mosaicing of underwater (UW) images an immensely challenging effort. Methods that can process conventional photographs (terrestrial/aerial) fail to deliver the desired results on UW images. Taking the sources of underwater degradations into account is central to ensuring quality performance. The work described in this paper specifically deals with the problem of mosaicing deep UW images captured by Remotely Operated Vehicles (ROVs). These images are mainly degraded by haze, color changes, and non-uniform illumination. We propose a framework that restores these images in accordance with a suitably derived degradation model. Furthermore, our scheme harnesses the scene geometry information present in each image to aid in constructing a mosaic that is free from artifacts such as local blurring, ghosting, double contouring and visible seams. Several experiments on real underwater images sequences have been carried out to demonstrate the performance of our mosaicing pipeline along with comparisons.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"33 1","pages":"74:1-74:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83641796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images","authors":"G. Gopakumar, M. Swetha, G. S. Siva, G. R. S. Subrahmanyam","doi":"10.1145/3009977.3010024","DOIUrl":"https://doi.org/10.1145/3009977.3010024","url":null,"abstract":"Malaria is a deadly infectious disease affecting red blood cells in humans due to the protozoan of type Plasmodium. In 2015, there is an estimated death toll of 438, 000 patients out of the total 214 million malaria cases reported world-wide. Thus, building an accurate automatic system for detecting the malarial cases is beneficial and has huge medical value. This paper addresses the detection of Plasmodium Falciparum infected RBCs from Leishman's stained microscope slide images. Unlike the traditional way of examining a single focused image to detect the parasite, we make use of a focus stack of images collected using a bright field microscope. Rather than the conventional way of extracting the specific features we opt for using Convolutional Neural Network that can directly operate on images bypassing the need for hand-engineered features. We work with image patches at the suspected parasite location there by avoiding the need for cell segmentation. We experiment, report and compare the detection rate received when only a single focused image is used and when operated on the focus stack of images. Altogether the proposed novel approach results in highly accurate malaria detection.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"94 1","pages":"16:1-16:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74408271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the (soccer) ball","authors":"Samriddha Sanyal, A. Kundu, D. Mukherjee","doi":"10.1145/3009977.3010022","DOIUrl":"https://doi.org/10.1145/3009977.3010022","url":null,"abstract":"The problem of tracking ball in a soccer video is challenging because of sudden change in speed and orientation of the soccer ball. Successful tracking in such a scenario depends on the ability of the algorithm to balance prior constraints continuously against the evidence garnered from the sequences of images. This paper proposes a particle filter based algorithm that tracks the ball when it changes its direction suddenly or takes high speed. Exact, deterministic tracking algorithms based on discretized functional, suffer from severe limitations in the form of prior constraints. Our tracking algorithm has shown excellent result even for partial occlusion which is a major concern in soccer video. We have shown that the proposed tracking algorithm is at least 7.2% better compared to competing approaches for soccer ball tracking.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"68 1","pages":"53:1-53:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74936824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Event geo-localization and tracking from crowd-sourced video metadata","authors":"Amit More, S. Chaudhuri","doi":"10.1145/3009977.3009993","DOIUrl":"https://doi.org/10.1145/3009977.3009993","url":null,"abstract":"We propose a novel technique for event geo-localization (i.e. 2-D location of the event on the surface of the earth) from the sensor metadata of crowd-sourced videos collected from smartphone devices. With the help of sensors available in the smartphone devices, such as digital compass and GPS receiver, we collect metadata information such as camera viewing direction and location along with the video. The event localization is then posed as a constrained optimization problem using available sensor metadata. Our results on the collected experimental data shows correct localization of events, which is particularly challenging for classical vision based methods because of the nature of the visual data. Since we only use sensor metadata in our approach, computational overhead is much less compared to what would be if video information is used. At the end, we illustrate the benefits of our work in analyzing the video data from multiple sources through geo-localization.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"138 1","pages":"24:1-24:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79788019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}