Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing: Latest Publications

Robust segmentation of corneal fibers from noisy images
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010051
Jia Chen, J. Jester, M. Gopi
{"title":"Robust segmentation of corneal fibers from noisy images","authors":"Jia Chen, J. Jester, M. Gopi","doi":"10.1145/3009977.3010051","DOIUrl":"https://doi.org/10.1145/3009977.3010051","url":null,"abstract":"Corneal collagen structure, which plays an important role in determining visual acuity, has drawn a lot of research attention to exploring its geometric properties. Advancement of nonlinear optical (NLO) imaging provides a potential way for capturing fiber-level structure of cornea, however, the artifacts introduced by the NLO imaging process make image segmentation on such images a bottleneck for further analysis. Especially, the existing methods fail to preserve the branching points which are important for mechanical analysis. In this paper, we propose a hybrid image segmentation method, which integrates seeded region growing and iterative voting. Results show that our algorithm outperforms state-of-the-art techniques in segmenting fibers from background while preserving branching points. Finally, we show that, based on the segmentation result, branching points and the width of fibers can be determined more accurately than the other methods, which is critical for mechanical analysis on corneal structure.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"128 1","pages":"58:1-58:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82784025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
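Illustrative sketch (not from the paper): the method combines seeded region growing with iterative voting. The region-growing half can be pictured with the minimal Python fragment below; the intensity tolerance, the 4-connectivity, and the way seeds are supplied are assumptions of this sketch, not the authors' criteria.

```python
from collections import deque
import numpy as np

def seeded_region_growing(img, seeds, tol=20):
    """Grow a binary region from seed pixels on a grayscale image.

    A pixel joins the region if its intensity differs from the mean seed
    intensity by at most `tol`. 4-connectivity is used. This is a generic
    sketch, not the authors' exact criterion.
    """
    img = img.astype(np.float32)
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    ref = np.mean([img[r, c] for r, c in seeds])
    queue = deque(seeds)
    for r, c in seeds:
        mask[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc]:
                if abs(img[rr, cc] - ref) <= tol:
                    mask[rr, cc] = True
                    queue.append((rr, cc))
    return mask
```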
Robust pedestrian tracking using improved tracking-learning-detection algorithm
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3009999
Ritika Verma, I. Sreedevi
{"title":"Robust pedestrian tracking using improved tracking-learning-detection algorithm","authors":"Ritika Verma, I. Sreedevi","doi":"10.1145/3009977.3009999","DOIUrl":"https://doi.org/10.1145/3009977.3009999","url":null,"abstract":"Manual analysis of pedestrians for surveillance of large crowds in real time applications is not practical. Tracking-Learning-Detection suggested by Kalal, Mikolajczyk and Matas [1] is one of the most prominent automatic object tracking system. TLD can track single object and can handle occlusion and appearance change but it suffers from limitations. In this paper, tracking of multiple objects and estimation of their trajectory is suggested using improved TLD. Feature tracking is suggested in place of grid based tracking to solve the limitation of tracking during out of plane rotation. This also leads to optimization of algorithm. Proposed algorithm also achieves auto-initialization with detection of pedestrians in the first frame which makes it suitable for real time pedestrian tracking.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"08 1","pages":"35:1-35:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85950954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
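Illustrative sketch (not from the paper): the entry replaces TLD's grid-based tracker with feature tracking. A common realisation of that idea, tracking Shi-Tomasi corners between consecutive frames with pyramidal Lucas-Kanade optical flow in OpenCV, is sketched below; the detector choice and parameter values are assumptions made here for illustration.

```python
import cv2
import numpy as np

def track_features(prev_gray, next_gray, max_corners=200):
    """Detect corners in the previous frame and track them into the next one.

    Returns matched point pairs (prev_pts, next_pts) for the points whose
    optical-flow status flag indicates a successful track.
    """
    prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                       qualityLevel=0.01, minDistance=7)
    if prev_pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                      prev_pts, None)
    good = status.ravel() == 1
    return prev_pts[good].reshape(-1, 2), next_pts[good].reshape(-1, 2)
```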
A stratified registration framework for DSA artifact reduction using random walker
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010066
Manivannan Sundarapandian, K. Ramakrishnan
{"title":"A stratified registration framework for DSA artifact reduction using random walker","authors":"Manivannan Sundarapandian, K. Ramakrishnan","doi":"10.1145/3009977.3010066","DOIUrl":"https://doi.org/10.1145/3009977.3010066","url":null,"abstract":"In Digital Subtraction Angiography (DSA), non-rigid registration of the mask and contrast images to reduce the motion artifacts is a challenging problem. In this paper, we have proposed a novel stratified registration framework for DSA artifact reduction. We use quad-trees to generate the non-uniform grid of control points and obtain the sub-pixel displacement offsets using Random Walker (RW). We have also proposed a sequencing logic for the control points and an incremental LU decomposition approach that enables reuse of the computations in the RW step. We have tested our approach using clinical data sets, and found that our registration framework has performed comparable to the graph-cuts (at the same partition level), in regions wherein 95% artifact reduction was achieved. The optimization step achieves a speed improvement of 4.2 times with respect to graph-cuts.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"8 1","pages":"68:1-68:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85034710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
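Illustrative sketch (not from the paper): the framework places control points on a non-uniform grid produced by a quad-tree. A minimal version of that idea, splitting a square image block whenever its intensity variance exceeds a threshold, is shown below; the split criterion and minimum block size are assumptions for illustration, and the Random Walker and incremental LU steps of the paper are not shown.

```python
import numpy as np

def quadtree_blocks(img, var_thresh=100.0, min_size=8):
    """Recursively split a (square, power-of-two sized) image into blocks.

    Returns (row, col, size) tuples; a block is split into four quadrants
    while its intensity variance exceeds `var_thresh` and its side is larger
    than `min_size`. Block corners can serve as a non-uniform control grid.
    """
    blocks = []

    def split(r, c, size):
        patch = img[r:r + size, c:c + size]
        if size > min_size and patch.var() > var_thresh:
            half = size // 2
            for dr in (0, half):
                for dc in (0, half):
                    split(r + dr, c + dc, half)
        else:
            blocks.append((r, c, size))

    split(0, 0, min(img.shape[:2]))
    return blocks
```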
Iris recognition using partial sum of second order Taylor series expansion
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010065
B. H. Shekar, S. S. Bhat
{"title":"Iris recognition using partial sum of second order Taylor series expansion","authors":"B. H. Shekar, S. S. Bhat","doi":"10.1145/3009977.3010065","DOIUrl":"https://doi.org/10.1145/3009977.3010065","url":null,"abstract":"Iris is presently one among the most sought after traits in biometric research. Extracting well-suited features from iris has been a favourite topic of the researchers. This paper proposes a novel iris feature extraction technique based on partial sum of second order Taylor series expansion (TSE). The finite sum of TSE computed on an arbitrary small neighbourhood on multiple scales can approximate the function extremely well and hence provides a powerful mechanism to extract the complex natured localised features of iris structure. To compute the higher order derivatives of TSE, we propose kernel structures by extending the Sobel operators. Extensive experiments are conducted with multiple scales on IITD, MMU v-2 and CASIA v-4 distance databases and comparative analysis is performed with the existing algorithms to substantiate the performance of the proposed method.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"11 1","pages":"81:1-81:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82900181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
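For reference, the "partial sum of the second-order Taylor series expansion" named above is the standard truncated local expansion of the image intensity I around a pixel, with the derivatives approximated by extended Sobel-like kernels; the formula below restates that textbook expansion and is not the paper's exact feature definition.

```latex
% Second-order Taylor approximation of I around pixel x, for a small offset h:
I(\mathbf{x} + \mathbf{h}) \approx
  I(\mathbf{x})
  + \mathbf{h}^{\top} \nabla I(\mathbf{x})
  + \tfrac{1}{2}\, \mathbf{h}^{\top} H(\mathbf{x})\, \mathbf{h},
\qquad
\nabla I = \begin{pmatrix} I_x \\ I_y \end{pmatrix}, \quad
H = \begin{pmatrix} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{pmatrix}
```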
Fast frontier detection in indoor environment for monocular SLAM
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010063
Sarthak Upadhyay, K. Krishna, S. Kumar
{"title":"Fast frontier detection in indoor environment for monocular SLAM","authors":"Sarthak Upadhyay, K. Krishna, S. Kumar","doi":"10.1145/3009977.3010063","DOIUrl":"https://doi.org/10.1145/3009977.3010063","url":null,"abstract":"Frontier detection is a critical component in autonomous exploration, wherein the robot decides the next best location to move in order to continue its mapping process. The existing frontier detection methods require dense reconstruction which is difficult to attain in a poorly textured indoor environment using a monocular camera. In this effort, we present an alternate method of detecting frontiers during the course of robot motion that circumvents the requirement of dense mapping. Based on the observation that frontiers typically occur around areas with sudden change in texture (zero-crossings), we propose a novel linear chain Conditional Random Field(CRF) formulation that is able to detect the presence or absence of frontier regions around such areas. We use cues like spread of 3D points and scene change around these areas as an observation to CRF. We demonstrate that this method gives us more relevant frontiers compared to other monocular camera based methods in the literature. Finally, we present results in an indoor environment, wherein frontiers are reliably detected around walls leading to new corridors, doors leading to new rooms or corridors and tables and other objects that open up to a new space in rooms.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"75 1","pages":"39:1-39:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83794189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
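Illustrative sketch (not from the paper): a linear-chain CRF over candidate zero-crossing regions can be decoded exactly with the Viterbi recursion once unary scores (e.g. from the 3D-point-spread and scene-change cues) and pairwise transition scores are available. The sketch below shows only that generic decoding step; the potentials themselves are placeholders, not the authors' design.

```python
import numpy as np

def viterbi_binary_chain(unary, pairwise):
    """Exact MAP decoding of a linear-chain CRF with binary labels.

    unary:    (T, 2) scores per position for labels {0: no frontier, 1: frontier}.
    pairwise: (2, 2) transition scores between consecutive labels.
    Returns the highest-scoring label sequence as a list of 0/1.
    """
    T = unary.shape[0]
    score = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    score[0] = unary[0]
    for t in range(1, T):
        for y in (0, 1):
            cand = score[t - 1] + pairwise[:, y]
            back[t, y] = int(np.argmax(cand))
            score[t, y] = cand[back[t, y]] + unary[t, y]
    labels = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):
        labels.append(int(back[t, labels[-1]]))
    return labels[::-1]
```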
An image analysis approach for transcription of music played on keyboard-like instruments
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010007
Souvik Deb, Ajit V. Rajwade
{"title":"An image analysis approach for transcription of music played on keyboard-like instruments","authors":"Souvik Deb, Ajit V. Rajwade","doi":"10.1145/3009977.3010007","DOIUrl":"https://doi.org/10.1145/3009977.3010007","url":null,"abstract":"Music transcription refers to the process of analyzing a piece of music to generate a sequence of constituent notes and their duration. Transcription of music from audio signals is fraught with problems due to auditory interference such as ambient noise, multiple instruments playing simultaneously, accompanying vocals or polyphonic sounds. For several instruments, there exists added information for music transcription which can be derived from a video sequence of the instrument as it is being played. This paper proposes a method to utilize this visual information for the case of keyboard-like instruments to generate a transcript automatically, by analyzing the video frames. We present encouraging results under varying lighting conditions on different song sequences played out on a keyboard.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"41 1","pages":"5:1-5:6"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80556385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
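Illustrative sketch (not from the paper): one simple, hypothetical way to recover key presses from video is to compare each frame against a reference frame of the idle keyboard, region by region. The key rectangles, the threshold, and the reference-frame idea below are assumptions of this illustration, not the authors' algorithm.

```python
import numpy as np

def pressed_keys(frame_gray, reference_gray, key_boxes, thresh=12.0):
    """Return indices of keys whose image region changed versus a reference.

    frame_gray, reference_gray: grayscale frames of the keyboard (same size).
    key_boxes: list of (top, left, bottom, right) rectangles, one per key.
    A key counts as pressed when the mean absolute intensity difference
    inside its rectangle exceeds `thresh`.
    """
    diff = np.abs(frame_gray.astype(np.float32) -
                  reference_gray.astype(np.float32))
    return [i for i, (top, left, bottom, right) in enumerate(key_boxes)
            if diff[top:bottom, left:right].mean() > thresh]
```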
Mosaicing deep underwater imagery
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010029
Kuldeep Purohit, Subeesh Vasu, A. Rajagopalan, V. Jyothi, Ramesh Raju
{"title":"Mosaicing deep underwater imagery","authors":"Kuldeep Purohit, Subeesh Vasu, A. Rajagopalan, V. Jyothi, Ramesh Raju","doi":"10.1145/3009977.3010029","DOIUrl":"https://doi.org/10.1145/3009977.3010029","url":null,"abstract":"Numerous sources of distortions render mosaicing of underwater (UW) images an immensely challenging effort. Methods that can process conventional photographs (terrestrial/aerial) fail to deliver the desired results on UW images. Taking the sources of underwater degradations into account is central to ensuring quality performance. The work described in this paper specifically deals with the problem of mosaicing deep UW images captured by Remotely Operated Vehicles (ROVs). These images are mainly degraded by haze, color changes, and non-uniform illumination. We propose a framework that restores these images in accordance with a suitably derived degradation model. Furthermore, our scheme harnesses the scene geometry information present in each image to aid in constructing a mosaic that is free from artifacts such as local blurring, ghosting, double contouring and visible seams. Several experiments on real underwater images sequences have been carried out to demonstrate the performance of our mosaicing pipeline along with comparisons.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"33 1","pages":"74:1-74:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83641796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
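Illustrative sketch (not from the paper): the restoration model cannot be reproduced from the abstract, but the geometric half of mosaicing is typically a feature-match-plus-homography step, sketched below with ORB features in OpenCV. The detector and RANSAC threshold are illustrative assumptions only.

```python
import cv2
import numpy as np

def pairwise_homography(img_a, img_b, max_features=1000):
    """Estimate the homography mapping img_b onto img_a from ORB matches.

    Returns a 3x3 homography (or None if too few matches), which can be fed
    to cv2.warpPerspective to place img_b into img_a's frame of reference.
    """
    orb = cv2.ORB_create(nfeatures=max_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)
    if len(matches) < 4:
        return None
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```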
Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010024
G. Gopakumar, M. Swetha, G. S. Siva, G. R. S. Subrahmanyam
{"title":"Automatic detection of Malaria infected RBCs from a focus stack of bright field microscope slide images","authors":"G. Gopakumar, M. Swetha, G. S. Siva, G. R. S. Subrahmanyam","doi":"10.1145/3009977.3010024","DOIUrl":"https://doi.org/10.1145/3009977.3010024","url":null,"abstract":"Malaria is a deadly infectious disease affecting red blood cells in humans due to the protozoan of type Plasmodium. In 2015, there is an estimated death toll of 438, 000 patients out of the total 214 million malaria cases reported world-wide. Thus, building an accurate automatic system for detecting the malarial cases is beneficial and has huge medical value. This paper addresses the detection of Plasmodium Falciparum infected RBCs from Leishman's stained microscope slide images. Unlike the traditional way of examining a single focused image to detect the parasite, we make use of a focus stack of images collected using a bright field microscope. Rather than the conventional way of extracting the specific features we opt for using Convolutional Neural Network that can directly operate on images bypassing the need for hand-engineered features. We work with image patches at the suspected parasite location there by avoiding the need for cell segmentation. We experiment, report and compare the detection rate received when only a single focused image is used and when operated on the focus stack of images. Altogether the proposed novel approach results in highly accurate malaria detection.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"94 1","pages":"16:1-16:7"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74408271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
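Illustrative sketch (not from the paper): a patch-level CNN of the kind described can stack the slices of a focus stack as input channels. The PyTorch model below is a hypothetical stand-in; the 32x32 patch size, layer widths, and number of focus slices are assumptions for illustration, not the network reported in the paper.

```python
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    """Tiny CNN labelling a patch as infected / not infected.

    in_channels is 1 for a single focused image, or Z when the Z slices of a
    focus stack are stacked as channels (patches assumed to be 32x32 pixels).
    """
    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),   # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(start_dim=1))

# Example: a batch of 4 patches taken from a 5-slice focus stack.
logits = PatchCNN(in_channels=5)(torch.randn(4, 5, 32, 32))
```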
User guided generation of corroded objects
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010031
N. Jain, P. Kalra, R. Ranjan, Subodh Kumar
{"title":"User guided generation of corroded objects","authors":"N. Jain, P. Kalra, R. Ranjan, Subodh Kumar","doi":"10.1145/3009977.3010031","DOIUrl":"https://doi.org/10.1145/3009977.3010031","url":null,"abstract":"Rendering of corrosion often requires pain-staking modeling and texturing. On the other hand, there exist techniques for stochastic modeling of corrosion, which can automatically perform simulation and rendering under control of some user-specified parameters. Unfortunately, these parameters are non-intuitive and have a global impact. It is hard to determine the values of these parameters to obtain a desired look. For example, in real life corrosion gets influenced by both internal object-specific geometric factors, like sharp corners and curvatures, and external interventions like scratches, blemishes etc. Further, a graphics designer may want to selectively corrode areas to obtain a particular scene. We present a technique for user guided spread of corrosion. Our framework encapsulates both structural and aesthetic factors. Given the material properties and the surrounding environmental conditions of an object, we employ a physio-chemically based stochastic model to deduce the decay of different points on that object. Our system equips the user with a platform where the imperfections can be provided by either manual or systematic interference on a rendering of the three dimensional object. We demonstrate several user guided characteristic simulations encompassing varied influences including material, object characteristics and environment conditions. Our results are visually validated to understand the impact of imperfections with elapsed time.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"5 1","pages":"89:1-89:8"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82299752","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
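Illustrative sketch (not from the paper): the authors' physico-chemical stochastic model is not given in the abstract; as a loose stand-in, the toy simulation below spreads corrosion from user-marked seed points (scratches, blemishes) with a fixed per-step probability. The spread rule, probability, and wrap-around borders are assumptions made purely for illustration.

```python
import numpy as np

def spread_corrosion(height, width, seed_points, steps=50,
                     spread_prob=0.25, rng=None):
    """Toy stochastic spread of corrosion over a 2-D surface grid.

    Cells in `seed_points` (user-provided scratches/blemishes) start corroded;
    at each step every corroded cell corrodes each 4-neighbour with
    probability `spread_prob`. np.roll wraps at the borders, which is
    acceptable for a toy sketch. Returns a boolean corrosion mask.
    """
    rng = np.random.default_rng() if rng is None else rng
    mask = np.zeros((height, width), dtype=bool)
    for r, c in seed_points:
        mask[r, c] = True
    for _ in range(steps):
        grow = np.zeros_like(mask)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            neighbour = np.roll(mask, (dr, dc), axis=(0, 1))
            grow |= neighbour & (rng.random(mask.shape) < spread_prob)
        mask |= grow
    return mask
```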
Analyzing object categories via novel category ranking measures defined on visual feature embeddings
Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing Pub Date: 2016-12-18 DOI: 10.1145/3009977.3010037
Ravi Kiran Sarvadevabhatla, Raviteja Meesala, Manjunath Hegde, R. Venkatesh Babu
{"title":"Analyzing object categories via novel category ranking measures defined on visual feature embeddings","authors":"Ravi Kiran Sarvadevabhatla, Raviteja Meesala, Manjunath Hegde, R. Venkatesh Babu","doi":"10.1145/3009977.3010037","DOIUrl":"https://doi.org/10.1145/3009977.3010037","url":null,"abstract":"Visualizing 2-D/3-D embeddings of image features can help gain an intuitive understanding of the image category landscape. However, popular visualization methods of visualizing such embeddings (e.g. color-coding by category) are impractical when the number of categories is large. To address this and other shortcomings, we propose novel quantitative measures defined on image feature embeddings. Each measure produces a ranked ordering of the categories and provides an intuitive vantage point from which to view the entire set of categories. As an experimental testbed, we use deep features obtained from category-epitomes, a recently introduced minimalist visual representation, across 160 object categories. We embed the features in a visualization-friendly yet similarity-preserving 2-D manifold and analyze the inter/intra-category distributions of these embeddings using the proposed measures. Our analysis demonstrates that the category ordering methods enable new insights for the domain of large-category object representations. Moreover, our ordering measure approach is general in nature and can be applied to any feature-based representation of categories.","PeriodicalId":93806,"journal":{"name":"Proceedings. Indian Conference on Computer Vision, Graphics & Image Processing","volume":"53 1","pages":"79:1-79:6"},"PeriodicalIF":0.0,"publicationDate":"2016-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83263374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
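Illustrative sketch (not from the paper): the ranking measures themselves are not specified in the abstract, but the similarity-preserving 2-D embedding they operate on is commonly obtained with t-SNE; the scikit-learn call below is shown purely as an assumed stand-in for whatever embedding the authors used.

```python
import numpy as np
from sklearn.manifold import TSNE

def embed_2d(features, perplexity=30, seed=0):
    """Project an (N, D) array of deep features to a 2-D layout with t-SNE.

    The layout preserves local similarity structure and can then be analysed
    for inter-/intra-category spread; t-SNE is only one possible choice.
    """
    return TSNE(n_components=2, perplexity=perplexity,
                random_state=seed, init="pca").fit_transform(features)

# Toy usage: 200 random 64-dimensional feature vectors.
coords = embed_2d(np.random.rand(200, 64))
```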