Latest publications from the 2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2010)

An Efficient Frequency-Domain Velocity-Filter Implementation for Dim Target Detection
H. L. Kennedy
DOI: 10.1109/DICTA.2010.16
Abstract: An efficient Fourier-domain implementation of the velocity filter is presented. The Sliding Discrete Fourier Transform (SDFT) is exploited to yield a Track-Before-Detect (TBD) algorithm with a complexity that is independent of the filter integration time. As a consequence, dim targets near the noise floor of acquisition or surveillance sensors may be detected, and their states estimated, at a relatively low computational cost. The performance of the method is demonstrated using real sensor data. When processing the acquired data, the SDFT implementation is approximately 3 times faster than the equivalent Fast Fourier Transform (FFT) implementation and 16 times faster than the corresponding spatiotemporal implementation.
Citations: 6
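The SDFT named in the abstract maintains the DFT of a sliding window recursively, updating all N bins in O(N) per new sample rather than recomputing an FFT per window. A minimal one-dimensional sketch of the standard SDFT recurrence is given below; the paper's multi-dimensional velocity-filter bank is not reproduced, and the function names are illustrative only.

```python
import cmath

def sdft_stream(x, N):
    """Sliding DFT: after each new sample, update every bin k via
    X_k <- (X_k + x_new - x_old) * exp(2j*pi*k/N), which keeps X equal
    to the DFT of the most recent N samples."""
    X = [0j] * N                        # running spectrum of the current window
    buf = [0.0] * N                     # circular buffer of the last N samples
    w = [cmath.exp(2j * cmath.pi * k / N) for k in range(N)]
    for n, xn in enumerate(x):
        old = buf[n % N]                # sample leaving the window
        buf[n % N] = xn
        X = [(Xk + xn - old) * wk for Xk, wk in zip(X, w)]
        if n >= N - 1:                  # window is full: emit its spectrum
            yield list(X)

def dft(window):
    """Direct DFT, used only to check the recursive result."""
    N = len(window)
    return [sum(s * cmath.exp(-2j * cmath.pi * k * m / N)
                for m, s in enumerate(window)) for k in range(N)]
```

The per-sample cost is N complex multiply-adds regardless of how long the filter has been integrating, which is the property the abstract exploits.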
Color Constancy-Based Visibility Enhancement in Low-Light Conditions
Jing Yu, Q. Liao
DOI: 10.1109/DICTA.2010.81
Abstract: Imaging in low-light conditions is often significantly degraded by insufficient lighting and color cast. Poor visibility becomes a major problem for many applications of computer vision. In this paper, we propose a novel color constancy-based method to enhance the visibility of low-light images. The proposed method applies an appropriate color constancy algorithm to the active set of pixels across the image. A post-processing step is also added to enhance the global contrast and lightness. Results on a wide variety of images demonstrate that the proposed method can achieve good rendition of lightness, contrast and color fidelity without the graying-out or halo artifacts intrinsically present in Retinex approaches.
Citations: 10
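The abstract does not spell out which colour constancy algorithm is applied. The gray-world assumption is the textbook example of the idea: scale each channel so that the scene average becomes achromatic. A minimal sketch on a flat list of RGB tuples, purely for illustration and not the authors' method:

```python
def gray_world(pixels):
    """Gray-world colour constancy: gain each channel so all three
    channel means equal the overall mean, removing a global colour cast.
    pixels: list of (r, g, b) tuples with float values in [0, 1]."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3                       # target achromatic level
    gains = [gray / m if m > 0 else 1.0 for m in means]
    return [tuple(min(1.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]
```

After correction the three channel means coincide, i.e. the colour cast is gone on average.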
Empirical Study of Multi-label Classification Methods for Image Annotation and Retrieval
G. Nasierding, A. Kouzani
DOI: 10.1109/DICTA.2010.113
Abstract: This paper presents an empirical study of multi-label classification methods, and gives suggestions for multi-label classification that are effective for automatic image annotation applications. The study shows that the triple random ensemble multi-label classification algorithm (TREMLC) outperforms its counterparts, especially on the scene image dataset. The multi-label k-nearest neighbor (ML-kNN) and binary relevance (BR) learning algorithms perform well on the Corel image dataset. Based on the overall evaluation results, examples are given to show label prediction performance for the algorithms using selected image examples. This provides an indication of the suitability of different multi-label classification methods for automatic image annotation under different problem settings.
Citations: 22
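Binary relevance (BR), one of the methods compared above, decomposes a multi-label problem into one independent binary decision per label. A minimal sketch using a k-NN majority vote as the per-label classifier on toy 2-D features; this illustrates the decomposition only and does not reproduce the paper's experimental setup:

```python
def binary_relevance_knn(X_train, Y_train, x, k=3):
    """Binary relevance: decide each label independently.
    X_train: list of feature tuples; Y_train: list of 0/1 label vectors.
    Each per-label classifier is a majority vote among the k nearest
    training points (squared Euclidean distance)."""
    nearest = sorted(range(len(X_train)),
                     key=lambda i: sum((a - b) ** 2
                                       for a, b in zip(X_train[i], x)))[:k]
    n_labels = len(Y_train[0])
    # label j is on iff more than half of the k neighbours carry it
    return [int(2 * sum(Y_train[i][j] for i in nearest) > k)
            for j in range(n_labels)]
```

BR ignores label correlations, which is exactly the weakness ensemble methods such as TREMLC aim to address.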
Scarf: Semi-automatic Colorization and Reliable Image Fusion
Anwaar Ul Haq, I. Gondal, M. Murshed
DOI: 10.1109/DICTA.2010.80
Abstract: Nighttime imagery poses significant enhancement challenges due to the loss of color information and the inability of a single sensor to capture complete visual information at night. To cope with this challenge, multiple sensors are used to capture reliable nighttime imagery, which in turn demands reliable visual information fusion. In this paper, we present a system, Scarf, which performs reliable image fusion using advanced feature extraction techniques and a novel semi-automatic colorization based on optimization conformal to the human visual system. Subjective and objective quality evaluation demonstrates the effectiveness of the proposed system.
Citations: 3
Segmentation of Dense 2D Bacilli Populations
P. Vallotton, L. Turnbull, C. Whitchurch, Lisa Mililli
DOI: 10.1109/DICTA.2010.23
Abstract: Bacteria outnumber all other known organisms by far, so there is considerable interest in characterizing them in detail and in measuring their diversity, evolution, and dynamics. Here, we present a system capable of correctly identifying rod-like bacteria (bacilli) in high-resolution phase contrast images. We use a probabilistic model together with several purpose-designed image features in order to split bacteria at the septum consistently. Our method commits less than 1% error on test images. It should also be applicable to the study of dense 2D systems composed of elongated elements, such as some viruses, molecules, parasites (plasmodium, euglena), diatoms, and crystals.
Citations: 13
Texture-Based Estimation of Physical Characteristics of Sand Grains
A. Newell, Lewis D. Griffin, R. Morgan, P. A. Bull
DOI: 10.1109/DICTA.2010.91
Abstract: The common occurrence and transportability of quartz sand grains make them useful for forensic analysis, provided that grains can be accurately and consistently designated into prespecified types. Recent advances in the analysis of surface texture features found in scanning electron microscopy images of such grains have advanced this process. However, this requires expert knowledge that is not only time intensive but also rare, meaning that automation is a highly attractive prospect if good levels of performance can be achieved. Basic Image Feature Columns (BIF Columns), which use local symmetry type to produce a highly invariant yet distinctive encoding, have shown leading performance in standard texture recognition tasks used in computer vision. However, the system has not previously been tested on a real-world problem. Here we demonstrate that the BIF Column system offers a simple yet effective solution to grain classification using surface texture. In a two-class problem, where human-level performance is expected to be perfect, the system classifies all but one grain from a sample of 88 correctly. In a harder task, where expert human performance is expected to be significantly less than perfect, our system achieves a correct classification rate of over 80%, with clear indications that performance could be improved if a larger dataset were available. Furthermore, very little tuning or adaptation has been necessary to achieve these results, giving cause for optimism about the general applicability of this system to other texture classification problems in forensic analysis.
Citations: 13
Multiple Views Tracking of Maritime Targets
Thomas Albrecht, G. West, T. Tan, Thanh Ly
DOI: 10.1109/DICTA.2010.59
Abstract: This paper explores techniques for multiple-view target tracking in a maritime environment using a mobile surveillance platform. We utilise an omnidirectional camera to capture full spherical video and use an Inertial Measurement Unit (IMU) to estimate the platform's ego-motion. For each target, a part of the omnidirectional video is extracted, forming a corresponding set of virtual cameras. Each target is then tracked using a dynamic template matching method and particle filtering. The predictions are then used to continuously adjust the orientations of the virtual cameras, keeping a lock on the targets. We demonstrate the performance of the application in several real-world maritime settings.
Citations: 6
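Template matching in its simplest form slides the target template over each frame and keeps the best-scoring location; the tracker above couples a dynamic variant of this with a particle filter, which is omitted here. A minimal sum-of-squared-differences (SSD) sketch on 2-D grayscale lists, illustrative rather than the authors' implementation:

```python
def ssd_match(frame, template):
    """Exhaustive template matching: return the (row, col) of the
    top-left corner where the template best matches the frame,
    scored by sum of squared differences (lower is better)."""
    H, W = len(frame), len(frame[0])
    h, w = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            s = sum((frame[r + i][c + j] - template[i][j]) ** 2
                    for i in range(h) for j in range(w))
            if s < best:
                best, best_pos = s, (r, c)
    return best_pos
```

In a tracker, a particle filter would restrict this search to locations predicted by the motion model instead of scanning the whole frame.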
Expression-Invariant 3D Face Recognition Using Patched Geodesic Texture Transform
F. Hajati, A. Raie, Yongsheng Gao
DOI: 10.1109/DICTA.2010.52
Abstract: Numerous methods have been proposed for expression-invariant 3D face recognition, but little attention has been given to local-based representations of the texture of 3D images. In this paper, we propose an expression-invariant 3D face recognition approach based on locally extracted moments of the texture when only one exemplar per person is available. We use a geodesic texture transform accompanied by Pseudo Zernike Moments to extract local feature vectors from the texture of a face. An extensive experimental investigation is conducted using the publicly available BU-3DFE face database, covering face recognition under expression variations. The performance of the proposed method is compared with that of two benchmark approaches. The encouraging experimental results demonstrate that the proposed method can be used for 3D face recognition in single-model databases.
Citations: 2
An Enhancement to Closed-Form Method for Natural Image Matting
Jun Zhu, Dengsheng Zhang, Guojun Lu
DOI: 10.1109/DICTA.2010.110
Abstract: Natural image matting is the task of estimating the fractional opacity of the foreground layer of an image. Many matting methods have been proposed, and most of them are trimap-based. Among these methods, closed-form matting offers both trimap-based and scribble-based matting. However, the closed-form method causes significant errors at background-hole regions due to over-smoothing. In this paper, we identify the source of the problem and propose a solution that enhances the closed-form method. Experiments show that our enhanced method improves accuracy for trimap-based images and obtains results similar to the closed-form method for scribble-based matting.
Citations: 2
Focusing the Normalised Information Distance on the Relevant Information Content for Image Similarity
Joselíto J. Chua, P. Tischer
DOI: 10.1109/DICTA.2010.10
Abstract: This paper investigates the normalised information distance (NID) proposed by Bennett et al. (1998) as an approach to measuring the visual similarity (or dissimilarity) of images. Earlier studies suggest that compression-based approximations to the NID can yield dissimilarity measures that correlate well with visual comparisons. However, results also indicate that conventional feature-based dissimilarity measures often outperform those based on the NID. This paper proposes that a theoretical decomposition of the NID can help explain why NID-based dissimilarity measures might not perform well compared to feature-based approaches. The theoretical decomposition considers the perceptually relevant and irrelevant information content for image similarity. We illustrate how NID-based dissimilarity measures can be improved by discarding the irrelevant information and applying the NID to only the relevant information.
Citations: 2
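The compression-based approximation usually used for the NID is the normalised compression distance (NCD), in which a real compressor stands in for the uncomputable Kolmogorov complexity. A minimal sketch using zlib; the authors' exact compressor and image encoding are not specified here:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalised compression distance: approximates the NID by replacing
    Kolmogorov complexity K(.) with compressed length C(.):
        NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))"""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```

Values near 0 indicate the two inputs share most of their information; values near 1 indicate they are essentially unrelated. The paper's point is that C(.) measures all information, so stripping perceptually irrelevant content before compression should sharpen the distance.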