2017 International Conference on Digital Image Computing: Techniques and Applications (DICTA) — Latest Publications

Unsupervised Long-Term Routine Modelling Using Dynamic Bayesian Networks
Yangdi Xu, David Bull, D. Damen
DOI: 10.1109/DICTA.2017.8227502
Abstract: Routine can be defined as the frequent and regular activity patterns over a specified timescale (e.g. a daily or weekly routine). In this work, we capture routine patterns for a single person from long-term visual data using a Dynamic Bayesian Network (DBN). Assuming a person always performs purposeful activities at corresponding locations, spatial, pose, and time-of-day information are used as sources of input for routine modelling. We assess variations of the independence assumptions within the DBN model among the selected features. Unlike traditional models that are trained with supervision, we automatically select the number of hidden states for fully unsupervised discovery of a single person's indoor routine. We emphasize unsupervised learning because it is practically unrealistic to obtain ground-truth labels for long-term behaviours. The datasets used in this work are long-term recordings of non-scripted activities in their native environments, each lasting six days. The first captures the routine of three individuals in an office kitchen; the second is recorded in a residential kitchen. We evaluate the discovered routine by comparing it to ground truth when present, using exhaustive search to relate discovered patterns to ground-truth ones. We also propose a graphical visualisation to represent and qualitatively evaluate the discovered routine.
Citations: 3
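At its simplest, inferring a routine state from observed features in a DBN reduces to a forward (filtering) pass over hidden states. A minimal sketch of that step as a plain HMM forward pass — the paper's full model, feature set, and state-selection procedure are not reproduced, and all names and parameters here are illustrative:

```python
import numpy as np

def forward(pi, A, B, obs):
    """HMM forward pass: filtered posterior over hidden (routine) states.

    pi:  (K,)  initial state distribution
    A:   (K,K) transition matrix, A[i, j] = P(state j | state i)
    B:   (K,M) emission matrix,   B[k, m] = P(observation m | state k)
    obs: sequence of observation indices
    """
    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()          # normalize to keep values well-scaled
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict, then weight by evidence
        alpha /= alpha.sum()
    return alpha
```

Repeated observations consistent with one state drive the posterior toward that state, which is the basic mechanism by which regular activity patterns become identifiable.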
Learning Variance Kernelized Correlation Filters for Robust Visual Object Tracking
Chenghuan Liu, D. Huynh, Mark Reynolds
DOI: 10.1109/DICTA.2017.8227458
Abstract: Visual tracking is a very challenging problem in computer vision, as the performance of a tracking algorithm may be degraded by many challenging issues in the scene, such as illumination change, deformation, and background clutter. So far no algorithm can handle all of these issues. Recently, it has been shown that correlation filters can be implemented efficiently and, with suitable features and kernel functions incorporated, can give very promising tracking results. In this paper, we propose to learn discriminative correlation filters that incorporate information from the variances of the target's appearance features. We have evaluated our filters against several recent tracking methods on the OTB benchmark dataset. Our results show that the additional feature variances help to improve the robustness of the correlation filters in complex scenes.
Citations: 0
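The correlation-filter machinery this tracker builds on can be sketched with a basic single-channel filter learned in closed form in the Fourier domain (a MOSSE-style formulation; the paper's kernelization and variance weighting are not reproduced, and all names are illustrative):

```python
import numpy as np

def gaussian_response(shape, sigma=2.0):
    """Desired correlation output: a Gaussian peak centred on the target."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_filter(patch, response, lam=1e-2):
    """Closed-form filter: H = (G * conj(F)) / (F * conj(F) + lam),
    where F, G are the FFTs of the patch and the desired response."""
    F = np.fft.fft2(patch)
    G = np.fft.fft2(response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def detect(H, patch):
    """Correlate a new patch with the learned filter; the response peak
    gives the estimated target location."""
    F = np.fft.fft2(patch)
    return np.real(np.fft.ifft2(H * F))
```

Training and detection both cost only a few FFTs, which is why correlation filters are attractive for real-time tracking; the regularizer `lam` prevents division by near-zero spectral energy.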
Judging Aesthetic Quality in Paintings Based on Artistic Inspired Color Features
S. A. Amirshahi, Joachim Denzler
DOI: 10.1109/DICTA.2017.8227452
Abstract: In this work, using a new set of color features for computer vision and image processing that are inspired by the work of artists, we try to classify different subjective properties of paintings, including aesthetic quality, beauty, and liking of color. We then investigate whether observers have individual tastes and opinions when evaluating different properties of artworks. The extracted features are used in a 5-fold cross-validated SVM to classify scores provided by individual observers. This work confirms that the properties that could be related to aesthetic issues in paintings are "in the eye of the beholder". In other words, using the scores provided by individual observers, we are not only able to reach a higher classification rate but can also find which properties are important to an observer when evaluating an image. Finally, we compare our proposed color features to a set of state-of-the-art color features used in the field of computer vision.
Citations: 5
Multi-Feature Kernel Discriminant Dictionary Learning for Classification in Alzheimer's Disease
Qing Li, Xia Wu, Lele Xu, L. Yao, Kewei Chen
DOI: 10.1109/DICTA.2017.8227467
Abstract: Classification of Alzheimer's disease (AD) versus normal controls (NC) is important for identifying disease abnormality and for intervention. This study focuses on distinguishing AD from NC using the multi-feature kernel supervised within-class-similarity discriminative dictionary learning algorithm (MKSCDDL) we introduced previously, which has shown strong performance in face recognition. Structural magnetic resonance imaging (sMRI), fluorodeoxyglucose (FDG) positron emission tomography (PET), and florbetapir-PET data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database were adopted for classification between AD and NC. With MKSCDDL, the classification accuracy reached 98.18% for AD vs. NC, superior to the results of several other state-of-the-art approaches (MKL, JRC, and mSRC), and the testing time was also competitive. The MKSCDDL procedure is a promising tool for assisting early disease diagnosis using neuroimaging data.
Citations: 1
HOSO: Histogram of Surface Orientation for RGB-D Salient Object Detection
David Feng, N. Barnes, Shaodi You
DOI: 10.1109/DICTA.2017.8227440
Abstract: Salient object detection using RGB-D data is an emerging field in computer vision. Salient regions are often characterized by an unusual surface orientation profile with respect to their surroundings. To capture such a profile, we introduce the histogram of surface orientation (HOSO) feature to measure surface orientation distribution contrast for RGB-D saliency. We propose a new unified model that integrates surface orientation distribution contrast with depth and color contrast across multiple scales. This model is implemented in a multi-stage saliency computation approach that performs contrast estimation using a kernel density estimator (KDE), estimates object positions from the low-level saliency map, and finally refines the estimated object positions with a graph-cut based approach. Our method is evaluated on two RGB-D salient object detection databases, achieving superior performance to previous state-of-the-art methods.
Citations: 6
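A histogram-of-surface-orientation feature of the kind described above can be sketched by binning unit surface normals over azimuth and elevation, then comparing regions with a histogram distance. This is a generic sketch, not the paper's exact binning or contrast measure; the chi-square distance and bin counts are assumptions:

```python
import numpy as np

def orientation_histogram(normals, n_bins=8):
    """Bin unit surface normals (N, 3) by azimuth/elevation into a
    normalized 2D orientation histogram, flattened to a vector."""
    nx, ny, nz = normals[:, 0], normals[:, 1], normals[:, 2]
    az = np.arctan2(ny, nx)                 # azimuth in [-pi, pi]
    el = np.arcsin(np.clip(nz, -1, 1))      # elevation in [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        az, el, bins=n_bins,
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-12)

def chi_square_contrast(h1, h2, eps=1e-12):
    """Chi-square distance between two orientation histograms; a flat
    region contrasted against a varied one yields a large value."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

Regions whose orientation distribution differs sharply from their surroundings score high under such a contrast, which is the intuition behind using surface orientation for saliency.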
Fast and Robust Multi-Modal Image Registration for 3D Knee Kinematics
Shabnam Saadat, M. Pickering, D. Perriman, J. Scarvell, Paul N. Smith
DOI: 10.1109/DICTA.2017.8227434
Abstract: The process of spatially aligning two or more images acquired from different devices or imaging protocols is known as multi-modal image registration. As the similarity measure is one of the most significant aspects of this process, various measures have been proposed to enhance multi-modal image registration. However, the currently available measures are either not sufficiently accurate or very computationally expensive. In this paper, a new hybrid multi-modal registration approach is proposed. The new approach combines a fast measure, based on matching image edges, with a robust but slow measure that uses the joint probability distribution of the two images to be registered. Our experimental results reveal that this hybrid approach provides performance equivalent to the previously best measures but with significantly reduced computational time.
Citations: 10
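The "robust but slow" family of measures built on the joint probability distribution of two images is commonly instantiated as mutual information over a joint intensity histogram. A minimal sketch of that idea — the paper's exact measure and its hybrid combination with the edge-based term are not reproduced:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information between two images, estimated from the joint
    intensity histogram. Higher values indicate better alignment."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image b
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Because it depends only on the co-occurrence of intensities, not on their actual values, this measure can align images from different modalities; the cost is that the joint histogram must be rebuilt at every candidate pose, which motivates pairing it with a cheaper edge-based term.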
Deformable and Occluded Object Tracking via Graph Learning
Wei Han, G. Huang, Dongshun Cui
DOI: 10.1109/DICTA.2017.8227424
Abstract: Object deformation and occlusion are ubiquitous problems in visual tracking. Though many efforts have been made to handle them, most existing tracking algorithms fail in cases of large deformation and severe occlusion. In this paper, we propose a graph learning-based tracking framework to handle both challenges. For each consecutive frame pair, we construct a weighted graph in which the nodes are the local parts of both frames. Our algorithm optimizes the graph similarity matrix until two disconnected subgraphs separate the foreground and background nodes. We assign foreground/background labels to the current frame's nodes based on the learned graph and estimate the object bounding box under an optimization framework using the predicted foreground parts. Experimental results on the Deform-SOT dataset show that the proposed method achieves state-of-the-art performance.
Citations: 0
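The final step — reading a foreground/background split off a learned similarity matrix — can be sketched with a standard spectral bipartition via the Fiedler vector. Note this only illustrates the separation step on a given similarity matrix; the paper's contribution is optimizing that matrix itself, which is not reproduced here:

```python
import numpy as np

def bipartition(W):
    """Split graph nodes into two groups using the Fiedler vector
    (eigenvector of the second-smallest eigenvalue of the Laplacian).

    W: (N, N) symmetric similarity matrix with zero diagonal.
    Returns a boolean label per node."""
    d = W.sum(axis=1)
    L = np.diag(d) - W                 # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return fiedler >= 0                # sign gives the two-way partition
```

When the similarity matrix has two well-connected blocks with weak links between them, the sign pattern of the Fiedler vector recovers the blocks, which is the sense in which "two disconnected subgraphs" separate foreground from background.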
Ubiquitous Document Capturing with Deep Learning
T. Naz, A. A. Khan, F. Shafait
DOI: 10.1109/DICTA.2017.8227501
Abstract: Digital and paper-based documents co-exist in our daily lives. Seamless integration of information from both sources is crucial for efficient knowledge management. This paper addresses an algorithm that detects a document so that it can be easily captured and converted into digital form for automatic integration of relevant information into electronic workflows. It uses deep learning to provide a solution that is more generalized and flexible than other available solutions.
Citations: 0
Hyper-Feature Based Tracking with the Fully-Convolutional Siamese Network
Yangliu Kuai, G. Wen, Dongdong Li
DOI: 10.1109/DICTA.2017.8227442
Abstract: Convolutional neural networks (CNNs) have drawn increasing interest in visual tracking, among which the fully-convolutional Siamese network based method (SiamFC) is quite popular due to its competitive precision and efficiency. Generally, SiamFC captures robust semantics from high-level features in the last layer but ignores detailed spatial features in earlier layers, and thus tends to drift towards similar target regions in the search area. In this paper, we design a skip-layer connection network on top of SiamFC to aggregate hierarchical feature maps and constitute hyper-feature representations of the target, considering that convolutional layers at different levels characterize the target from different perspectives and that the lower-level feature maps of SiamFC are computed beforehand. The hyper-features incorporate deep but highly semantic, intermediate but complementary, and shallow but naturally high-resolution representations. The designed network is trained end-to-end offline, similarly to SiamFC, on the ILSVRC2015 dataset and later used for online tracking. Experimental results on the OTB benchmark show that the proposed algorithm performs favourably against many state-of-the-art trackers in terms of accuracy while maintaining real-time tracking speed.
Citations: 5
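The skip-layer aggregation idea — upsampling deeper, lower-resolution feature maps to the shallow layer's resolution and stacking them into one hyper-feature tensor — can be sketched as follows. The shapes, the nearest-neighbour upsampling, and the plain concatenation are illustrative assumptions; the paper's actual network layers are not reproduced:

```python
import numpy as np

def upsample2x(fmap):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return fmap.repeat(2, axis=1).repeat(2, axis=2)

def hyper_feature(shallow, mid, deep):
    """Stack feature maps from three network depths into one tensor.

    shallow: (C1, H, W)       high resolution, low-level detail
    mid:     (C2, H/2, W/2)   intermediate features
    deep:    (C3, H/4, W/4)   low resolution, high-level semantics
    Returns a (C1 + C2 + C3, H, W) hyper-feature tensor."""
    return np.concatenate(
        [shallow, upsample2x(mid), upsample2x(upsample2x(deep))],
        axis=0)
```

The resulting tensor lets a single correlation step see both fine spatial detail and semantic context, which is what mitigates drift toward look-alike regions.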
Double-Guided Filtering: Image Smoothing with Structure and Texture Guidance
Kaiyue Lu, Shaodi You, N. Barnes
DOI: 10.1109/DICTA.2017.8227425
Abstract: Image smoothing is a fundamental technique that aims to preserve image structure and remove insignificant texture. Balancing the trade-off between preserving structure and suppressing texture, however, is not trivial. This is because existing methods rely on only one guidance to infer structure or texture and assume the other is dependent. In many cases, however, textures are composed of repetitive structures and are difficult to distinguish using only one guidance. In this paper, we aim to better resolve the trade-off by applying two independent guidances, one for structure and one for texture. Specifically, we adopt semantic edge detection as structure guidance and texture decomposition as texture guidance. Based on this, we propose a kernel-based image smoothing method called the double-guided filter (DGF). This paper introduces, for the first time, the concept of texture guidance, and DGF is the first kernel-based method that leverages structure and texture guidance simultaneously to be both 'structure-aware' and 'texture-aware'. We present a number of experiments to show the effectiveness of the proposed filter.
Citations: 4
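A kernel filter driven by two guidance maps can be sketched as a weighted local average whose weights combine a spatial kernel, an edge-guidance term that blocks smoothing across structure, and a texture-guidance term that encourages smoothing over texture. The weight forms below (especially the `1 + texture` blending) are hypothetical stand-ins, not the paper's formulation:

```python
import numpy as np

def double_guided_smooth(img, edge_g, tex_g, radius=3, sigma_s=2.0, sigma_e=0.2):
    """Kernel smoothing steered by two guidance maps.

    img:    (H, W) grayscale image
    edge_g: (H, W) structure guidance, high at semantic edges (preserve)
    tex_g:  (H, W) texture guidance, high in textured regions (smooth)"""
    h, w = img.shape
    out = np.zeros((h, w))
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    I = np.pad(img.astype(float), radius, mode='reflect')
    E = np.pad(edge_g.astype(float), radius, mode='reflect')
    T = np.pad(tex_g.astype(float), radius, mode='reflect')
    k = 2 * radius + 1
    for y in range(h):
        for x in range(w):
            win_I = I[y:y + k, x:x + k]
            win_E = E[y:y + k, x:x + k]
            win_T = T[y:y + k, x:x + k]
            # structure: down-weight neighbours whose edge response differs
            w_edge = np.exp(-(win_E - E[y + radius, x + radius]) ** 2
                            / (2 * sigma_e ** 2))
            # texture: give textured neighbours extra weight so they average out
            w_tex = 1.0 + win_T
            wgt = spatial * w_edge * w_tex
            out[y, x] = (wgt * win_I).sum() / wgt.sum()
    return out
```

With a flat edge map the filter degrades to a Gaussian blur; a strong edge response at a boundary suppresses averaging across it, which is the "structure-aware" behaviour the abstract describes.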