2014 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Latest Publications

A Convolutional Neural Network for Automatic Analysis of Aerial Imagery
F. Maire, Luis Mejías Alvarez, A. Hodgson
DOI: 10.1109/DICTA.2014.7008084
Abstract: This paper introduces a new method to automate the detection of marine species in aerial imagery using a machine learning approach. Our proposed system has a convolutional neural network at its core. We compare this trainable classifier to a handcrafted classifier based on color features, entropy and shape analysis. Experiments demonstrate that the convolutional neural network outperforms the handcrafted solution. We also introduce a negative training example selection method for situations where the original training set consists of a collection of labeled images in which the objects of interest (positive examples) have been marked by a bounding box. We show that picking random rectangles from the background is not necessarily the best way to generate negative examples that are useful for learning.
Citations: 23
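The baseline negative-selection strategy the abstract critiques, sampling random background rectangles that avoid the labeled positive bounding boxes, can be sketched as follows. This is a minimal illustration; all function names and parameters are hypothetical, not taken from the paper.

```python
import random

def rect_overlap(a, b):
    """Axis-aligned overlap test for (x, y, w, h) rectangles."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return not (ax + aw <= bx or bx + bw <= ax or
                ay + ah <= by or by + bh <= ay)

def sample_negatives(img_w, img_h, positives, patch, n, seed=0):
    """Draw n square background patches that intersect no positive box.

    This is the naive 'random rectangles from the background' baseline
    that the paper argues is not necessarily the best strategy.
    """
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = rng.randrange(0, img_w - patch)
        y = rng.randrange(0, img_h - patch)
        cand = (x, y, patch, patch)
        if not any(rect_overlap(cand, p) for p in positives):
            out.append(cand)
    return out
```

A smarter selection scheme, which is the paper's point, would bias sampling toward background regions that are easy to confuse with the positives.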
A Multiple Features Distance Preserving (MFDP) Model for Saliency Detection
Dongyan Guo, Jian Zhang, Min Xu, Xiangjian He, Minxian Li, Chunxia Zhao
DOI: 10.1109/DICTA.2014.7008087
Abstract: Saliency detection plays a vital role in various image analysis tasks, such as content-aware image retargeting, image retrieval and object detection. It is generally accepted that saliency detection can benefit from the integration of multiple visual features. However, most existing methods fuse multiple features at the saliency map level without considering cross-feature information, i.e., they generate a final saliency map from several maps each computed from an individual feature. In this paper, we propose a Multiple Features Distance Preserving (MFDP) model to seamlessly integrate multiple visual features through an alternative optimization process. Our method outperforms state-of-the-art methods on saliency detection. Saliency detected by our method is further combined with a seam carving algorithm and significantly improves performance on image retargeting.
Citations: 0
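The map-level fusion that the abstract argues against, which discards cross-feature information, is typically just a per-pixel average of the individual feature maps. A sketch of that baseline (not the MFDP model itself; names are illustrative):

```python
def fuse_map_level(maps):
    """Naive map-level fusion: average per-feature saliency maps pixel-wise.

    Each map is a list of rows of floats; all maps must share one shape.
    This is the baseline the MFDP paper improves upon, since averaging
    finished maps ignores interactions between the features themselves.
    """
    n = len(maps)
    h, w = len(maps[0]), len(maps[0][0])
    return [[sum(m[i][j] for m in maps) / n for j in range(w)]
            for i in range(h)]
```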
Supervised Latent Dirichlet Allocation Models for Efficient Activity Representation
Sabanadesan Umakanthan, S. Denman, C. Fookes, S. Sridharan
DOI: 10.1109/DICTA.2014.7008130
Abstract: Local spatio-temporal features with a Bag-of-Visual-Words model form a popular approach to human action recognition. Bag-of-features methods face several challenges, such as extracting appropriate appearance and motion features from videos, converting the extracted features into a form suitable for classification, and designing a suitable classification framework. In this paper we address the problem of efficiently representing the extracted features for classification in order to improve overall performance. We introduce two generative supervised topic models, maximum entropy discrimination LDA (MedLDA) and class-specific simplex LDA (css-LDA), to encode the raw features for discriminative SVM-based classification. Unsupervised LDA models disconnect topic discovery from the classification task and hence yield poor results compared to the baseline Bag-of-Words framework. Supervised LDA techniques, on the other hand, learn the topic structure by considering the class labels and improve recognition accuracy significantly. MedLDA maximizes the likelihood and within-class margins using max-margin techniques and yields a sparse, highly discriminative topic structure, while css-LDA learns separate class-specific topics instead of a common set of topics across the entire dataset. In our representation, topics are learned first and each video is then represented as a topic proportion vector, comparable to a histogram of topics. Finally, SVM classification is performed on the learned topic proportion vectors. We demonstrate the efficiency of these two representation techniques through experiments on two popular datasets. Experimental results show significantly improved performance compared to the baseline Bag-of-Features framework, which uses k-means to construct a histogram of words from the feature vectors.
Citations: 1
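The baseline the authors compare against builds a Bag-of-Words histogram by quantizing local features against a k-means codebook. The quantization step can be sketched as follows; the codebook centers are assumed to be already learned, and the names are illustrative, not from the paper.

```python
from collections import Counter

def nearest(centers, v):
    """Index of the codebook center closest (squared L2) to feature v."""
    return min(range(len(centers)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(centers[i], v)))

def bow_histogram(features, centers):
    """Normalized histogram of codeword assignments for one video's features.

    In the baseline framework, this histogram is the video representation
    fed to the classifier; the paper replaces it with a learned topic
    proportion vector of the same histogram-like shape.
    """
    counts = Counter(nearest(centers, f) for f in features)
    n = len(features)
    return [counts[i] / n for i in range(len(centers))]
```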
A Novel Multi-Modal Image Registration Method Based on Corners
Guohua Lv, S. Teng, Guojun Lu
DOI: 10.1109/DICTA.2014.7008090
Abstract: This paper presents a novel corner-based method for registering multi-modal images. The proposed method is motivated by the fact that large content differences are likely to occur in multi-modal images. Unlike traditional multi-modal image registration methods that use intensities or gradients for feature representation, we propose to use the curvatures of corners. Moreover, a novel local descriptor called Distribution of Edge Pixels Along Contour (DEPAC) is proposed to represent the neighborhood of corners. Curvature and DEPAC similarities are combined in our method to improve registration accuracy. Using a set of benchmark multi-modal images and multi-modal microscopic images, we demonstrate that the proposed method outperforms an existing state-of-the-art image registration method.
Citations: 3
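The abstract names the DEPAC descriptor but does not define its construction. One plausible reading, a normalized angular histogram of edge pixels around the corner, is sketched below purely as an illustration; the actual DEPAC definition may differ.

```python
import math

def edge_distribution(corner, edge_pixels, bins=8):
    """Hypothetical DEPAC-style sketch: bin edge pixels by angle about
    the corner and normalize to a distribution. Not the paper's formula."""
    cx, cy = corner
    width = 2 * math.pi / bins
    hist = [0] * bins
    for x, y in edge_pixels:
        ang = math.atan2(y - cy, x - cx) % (2 * math.pi)
        hist[min(int(ang / width), bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

A descriptor of this shape is invariant to intensity differences between modalities, which matches the abstract's motivation for avoiding intensity- and gradient-based representations.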
The W-Penalty and Its Application to Alpha Matting with Sparse Labels
Stephen Tierney, Junbin Gao, Yi Guo
DOI: 10.1109/DICTA.2014.7008132
Abstract: Alpha matting is an ill-posed problem; as such, the user must supply dense partial labels for an acceptable solution to be reached. Unfortunately, this labelling can be time consuming. In this paper we introduce the w-penalty function, which, when incorporated into existing matting techniques, allows users to supply extremely sparse input. The formulated objective function encourages driving matte values to 0 and 1. Experiments demonstrate that the proposed model outperforms the state-of-the-art KNN matting algorithm. MATLAB code for the proposed method is freely available in the MatteKit package.
Citations: 0
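The abstract does not give the exact functional form of the w-penalty, only its design goal: drive matte values toward 0 and 1. A double-well function with zeros at exactly 0 and 1 illustrates that goal; this is purely an illustration, not the paper's penalty.

```python
def double_well(alpha):
    """Illustrative penalty (NOT the paper's w-penalty): zero cost at
    alpha = 0 and alpha = 1, positive cost for intermediate values, so
    minimizing it pushes matte entries toward fully opaque or fully
    transparent."""
    return (alpha ** 2) * ((1.0 - alpha) ** 2)
```

Adding a term like this to a matting objective penalizes "mushy" intermediate alpha values, which is how the formulation can tolerate very sparse user labels.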
Multiple Features Based Low-Contrast Infrared Ship Image Segmentation Using Fuzzy Inference System
Tao Wang, X. Bai, Yu Zhang
DOI: 10.1109/DICTA.2014.7008117
Abstract: Infrared (IR) ship image segmentation is a challenging task due to defects of IR images such as low contrast, sea clutter and noise. To address this problem, we propose a multiple features based IR ship image segmentation method using a fuzzy inference system (FIS). Because of the complexity of low-contrast IR images, the ship target cannot be segmented using only one kind of feature, so we extract multiple features from the IR image to represent the ship target sufficiently. Since an FIS can handle the uncertainty of IR images well and express expert knowledge with fuzzy rules, the multiple features are fed into the FIS, and the ship target can then be extracted directly from its output. The proposed method proceeds as follows. First, intensity is chosen as the first input to the FIS, because it is the fundamental feature of a ship target in an IR image. Second, a spatial feature is constructed through saliency detection, region growing and morphological processing, representing the spatial constraint on the ship target region. Third, the multiple features are fuzzified with adaptive methods and prior knowledge. Fourth, the fuzzified features are combined through the FIS according to fuzzy rules based on expert knowledge. Finally, the intact ship target segmentation is extracted from the output of the FIS. Experimental results show that our method effectively extracts complete and precise ship targets from low-contrast IR ship images. Moreover, our method performs better than other existing segmentation methods.
Citations: 4
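Fuzzification, the step where crisp inputs such as intensity and the spatial feature are mapped to membership degrees before the rules fire, is commonly done with trapezoidal membership functions. A generic sketch follows; the paper's actual membership shapes and rule base are not specified in the abstract.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: rises on [a, b], is fully
    'true' (1.0) on [b, c], and falls on [c, d]. A Mamdani-style rule
    would then combine memberships, e.g. min(mu_intensity, mu_spatial),
    as its firing strength."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)
```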
A High-Precision Registration Method Based on Auxiliary Sphere Targets
Junhui Huang, Zhao Wang, Weihua Bao, Jianmin Gao
DOI: 10.1109/DICTA.2014.7008085
Abstract: High-precision data registration is key to ensuring high-precision three-dimensional profile measurement. In order to register featureless surfaces or surfaces with non-overlapping data, this paper proposes a new registration method based on auxiliary sphere targets. Combined with the ICP algorithm, the new method uses the sphere targets to fit continuous spheres and to provide a spherical constraint. The continuous spheres supply registration features and more accurate corresponding point pairs for high-precision registration, while the spherical constraint forces the non-overlapping measurement point clouds to be aligned to the sphere targets. Simulation and experiments verify the effectiveness of the proposed method.
Citations: 4
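Fitting a sphere to measured target points is the core geometric step such a method relies on. A standard algebraic least-squares sphere fit (a textbook technique, not the authors' implementation) rewrites the sphere equation as the linear model x² + y² + z² = 2ax + 2by + 2cz + d and solves its normal equations for the center (a, b, c) and radius:

```python
import math

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_sphere(pts):
    """Algebraic least-squares sphere fit via the 4x4 normal equations."""
    A = [[2 * x, 2 * y, 2 * z, 1.0] for x, y, z in pts]
    b = [x * x + y * y + z * z for x, y, z in pts]
    AtA = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(4)]
           for i in range(4)]
    Atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(4)]
    a, bb, c, d = solve(AtA, Atb)
    r = math.sqrt(d + a * a + bb * bb + c * c)
    return (a, bb, c), r
```

The fitted center and radius then give both the corresponding-point pairs and the spherical constraint that the abstract describes feeding into ICP.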
Partial Fingerprint Matching through Region-Based Similarity
Omid Zanganeh, B. Srinivasan, Nandita Bhattacharjee
DOI: 10.1109/DICTA.2014.7008121
Abstract: Despite advances in fingerprint matching, partial/incomplete/fragmentary fingerprint recognition remains a challenging task. While the miniaturization of fingerprint scanners limits capture to only part of the fingerprint, there is also special interest in processing latent fingerprints, which are likely to be partial and of low quality. Partial fingerprints do not include all the structures available in a full fingerprint, hence a suitable matching technique that is independent of specific fingerprint features is required. Common fingerprint recognition methods are based on fingerprint minutiae, which do not perform well on low-quality images and may not even be suitable for partial fingerprint recognition. To overcome this drawback, we propose a region-based fingerprint recognition method in which fingerprints are compared in a pixel-wise manner by computing their correlation coefficient, so that all attributes of the fingerprint contribute to the matching decision. Compared to minutiae-based methods, such a technique promises to recognise a partial fingerprint as accurately as a full one. The proposed method is based on simple but effective metrics defined to compute local similarities, which are then combined into a global score in a way that is less affected by the distribution skew of the local similarities. Extensive experiments on the Fingerprint Verification Competition (FVC) dataset demonstrate the superiority of the proposed method over other techniques in the literature.
Citations: 36
Robust Visual Tracking via Rank-Constrained Sparse Learning
B. Bozorgtabar, Roland Göcke
DOI: 10.1109/DICTA.2014.7008129
Abstract: In this paper, we present an improved low-rank sparse learning method for particle filter based visual tracking, which we denote rank-constrained sparse learning. Since each particle can be sparsely represented by a linear combination of bases from an adaptive dictionary, we exploit the underlying structure between particles by constraining the rank of the particle sparse representations jointly over the adaptive dictionary. Besides utilising a common structure among particles, the proposed tracker also selects the most discriminative features for particle representation using an additional feature selection module in the proposed objective function. Furthermore, we present an efficient way to solve this learning problem by connecting the low-rank structure extracted from the particles to a simpler learning problem in a devised discriminative subspace, which improves the overall computational complexity for high-dimensional particle candidates. Finally, to achieve a more robust tracker, we augment the sparse representation of particles with adaptive weights, which indicate the similarity between candidates and the dictionary templates. The proposed approach is extensively evaluated on the VOT 2013 visual tracking evaluation platform, comprising 16 challenging sequences. Experimental results show the robustness and effectiveness of the proposed tracker compared to state-of-the-art methods.
Citations: 0
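The adaptive weights mentioned at the end, similarities between a candidate and the dictionary templates, could take many forms; a Gaussian-of-distance weighting normalized over templates is one common choice, assumed here for illustration and not taken from the paper.

```python
import math

def adaptive_weights(candidate, templates, sigma=1.0):
    """Illustrative similarity weights (form assumed): Gaussian of the
    L2 distance between a candidate and each dictionary template,
    normalized to sum to 1, so closer templates dominate the
    candidate's sparse representation."""
    dists = [math.sqrt(sum((a - b) ** 2 for a, b in zip(candidate, t)))
             for t in templates]
    w = [math.exp(-(d ** 2) / (2 * sigma ** 2)) for d in dists]
    z = sum(w)
    return [x / z for x in w]
```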
A Semi-Quantitative Analysis Model with Parabolic Modelling for DCE-MRI Sequences of Prostate
G. Samarasinghe, A. Sowmya, D. Moses
DOI: 10.1109/DICTA.2014.7008092
Abstract: Dynamic Contrast Enhanced Magnetic Resonance Imaging (DCE-MRI), also called perfusion Magnetic Resonance Imaging, is an advanced Magnetic Resonance Imaging (MRI) modality used in the non-invasive diagnosis of prostate cancer. In this paper we propose a novel semi-quantitative model that represents the perfusion behaviour of 3-dimensional prostate voxels in DCE-MRI sequences through parametric evaluation of parabolic polynomials. The perfusion data of each prostate voxel is modelled by a best-fit parabolic function using second-order non-linear regression. A single parameter is then derived from the geometric parameters of the parabola to represent the amount and rapidity of the voxel's signal intensity enhancement in response to the contrast agent. Finally, prostate voxels are classified using k-means clustering based on the parameter derived by the proposed model. A qualitative evaluation was performed by an expert radiologist on the classification results, represented as graphical summarizations of perfusion MR data for 70 axial DCE-MRI slices from 10 patients. The results show that the proposed semi-quantitative model and its derived parameter have the potential to be used in manual observation or in Computer-Aided Diagnosis (CAD) systems for prostate cancer recognition.
Citations: 4
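The per-voxel modelling step, fitting a parabola to a time-intensity curve by least squares, can be sketched directly. The single derived parameter below (peak height scaled by curvature) is a hypothetical stand-in, since the abstract does not define the actual formula.

```python
def fit_parabola(ts, ys):
    """Least-squares fit of y = a*t^2 + b*t + c via the 3x3 normal
    equations, solved by Gaussian elimination with partial pivoting."""
    n = len(ts)
    S = lambda k: sum(t ** k for t in ts)
    Sy = lambda k: sum((t ** k) * y for t, y in zip(ts, ys))
    M = [[S(4), S(3), S(2), Sy(2)],
         [S(3), S(2), S(1), Sy(1)],
         [S(2), S(1), float(n), Sy(0)]]
    for col in range(3):                      # forward elimination
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):                       # back substitution
        coeffs[r] = (M[r][3] - sum(M[r][c] * coeffs[c]
                                   for c in range(r + 1, 3))) / M[r][r]
    return coeffs  # [a, b, c]

def enhancement_parameter(a, b, c):
    """Hypothetical single parameter (NOT the paper's definition):
    vertex height of the parabola times its curvature magnitude,
    combining 'amount' and 'rapidity' of enhancement."""
    peak = c - b * b / (4.0 * a)   # y-value at the parabola's vertex
    return peak * abs(a)
```

In the paper's pipeline, a parameter of this kind is computed per voxel and the voxels are then grouped by k-means clustering on that value.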