2016 Sixth International Conference on Image Processing Theory, Tools and Applications (IPTA): Latest Publications

No-reference quality assessment for contrast-distorted image
Jun Wu, Zhaoqiang Xia, Yifeng Ren, Huifang Li
DOI: 10.1109/IPTA.2016.7820968
Abstract: Contrast change is a special type of image distortion that is vitally important for the visual perception of image quality, yet little investigation has been dedicated to contrast-distorted images. A proper contrast change may improve human visual perception rather than reduce it, which means that full-reference methods cannot assess contrast-distorted images properly. In this paper, we propose a no-reference method for assessing contrast-distorted images. Five statistical features are extracted from the distorted image, and two features are extracted from its phase congruency (PC) map. These features, together with the human mean opinion scores (MOS) of the training images, are used to train a support vector regression (SVR) model, and the quality of a test image is then evaluated by the learned model. Experiments on the CCID2014 database demonstrate the promising performance of the proposed metric.
Citations: 8
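The first stage of the pipeline above (global statistical features computed from the distorted image, later fed with MOS labels to an SVR) can be sketched as follows. This is a minimal illustration assuming a hypothetical choice of five statistics (mean, standard deviation, entropy, skewness, kurtosis), not the paper's exact features, and it omits the phase congruency map and the SVR stage:

```python
import numpy as np

def contrast_features(img):
    """Global statistics of a grayscale image with values in [0, 1] that are
    commonly used to characterize contrast distortion: mean, standard
    deviation, entropy, skewness and kurtosis.  The exact five features of
    the paper are an assumption here."""
    x = img.ravel().astype(np.float64)
    mu, sigma = x.mean(), x.std()
    hist, _ = np.histogram(x, bins=64, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -(p * np.log2(p)).sum()
    z = (x - mu) / (sigma + 1e-12)
    skew = (z ** 3).mean()
    kurt = (z ** 4).mean() - 3.0
    return np.array([mu, sigma, entropy, skew, kurt])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.uniform(0.0, 1.0, size=(64, 64))
    low = 0.5 + 0.1 * (img - 0.5)           # contrast-compressed copy
    f_ref, f_low = contrast_features(img), contrast_features(low)
    print(len(f_ref), f_low[1] < f_ref[1])  # 5 True
```

With such vectors, any regressor (e.g. `sklearn.svm.SVR`) could then be fit against the MOS values of the training images.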
Automatic detection, tracking and counting of birds in marine video content
R. T'Jampens, F. Hernandez, F. Vandecasteele, S. Verstockt
DOI: 10.1109/IPTA.2016.7821031
Abstract: Robust automatic detection of moving objects in a marine context is a multi-faceted problem due to the complexity of the observed scene. The dynamic nature of the sea, caused by waves, boat wakes and weather conditions, poses huge challenges for the development of a stable background model. Moreover, camera motion, reflections, lightning and illumination changes may contribute to false detections. Dynamic background subtraction (DBGS) is widely considered a solution to this issue in the scope of vessel detection for maritime traffic analysis. In this paper, the DBGS techniques suggested for ships are investigated and optimized for the monitoring and tracking of birds in marine video content. In addition to background subtraction, foreground candidates are filtered by a classifier based on their feature descriptors in order to remove non-bird objects. Different types of classifiers have been evaluated, and results on a ground-truth-labeled dataset of challenging video fragments show precision and recall of about 95% for the best-performing classifier. The remaining foreground items are counted and birds are tracked along the video sequence using spatio-temporal motion prediction. This allows marine scientists to study the presence and behavior of birds.
Citations: 16
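The detection step can be illustrated with a toy dynamic-background model. A minimal sketch assuming a per-pixel median background and 4-connected blob counting, far simpler than the DBGS variants evaluated in the paper:

```python
import numpy as np

def count_blobs(mask):
    """Count 4-connected foreground regions with an iterative flood fill."""
    mask = mask.copy()
    H, W = mask.shape
    count = 0
    for i in range(H):
        for j in range(W):
            if mask[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < H and 0 <= x < W and mask[y, x]:
                        mask[y, x] = False
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return count

def detect_moving_objects(frames, thresh=0.2):
    """Toy dynamic-background subtraction: the background is the per-pixel
    median over the sequence; pixels far from it are foreground, and each
    connected foreground region is one candidate object."""
    stack = np.stack(frames).astype(np.float64)
    background = np.median(stack, axis=0)
    return [count_blobs(np.abs(f - background) > thresh) for f in stack]

if __name__ == "__main__":
    frames = []
    for t in range(5):
        f = np.zeros((32, 32))
        f[5:8, 5 + 4 * t:8 + 4 * t] = 1.0   # one "bird" moving right
        frames.append(f)
    print(detect_moving_objects(frames))     # [1, 1, 1, 1, 1]
```

In the paper's pipeline, each detected blob would additionally be described by features and passed through the non-bird-rejection classifier before counting.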
Multiple description coding based on enhanced X-tree
C. Cai, J. Chen, H. Zeng
DOI: 10.1109/IPTA.2016.7820953
Abstract: This paper proposes a multiple description image coding scheme based on the 2D dual-tree transform and an enhanced X-tree encoding method. The input image is first mapped into the 2D dual-tree discrete wavelet domain to form two wavelet coefficient trees. A sparse algorithm is then used to remove the most redundant wavelet coefficients resulting from the dual-tree discrete wavelet transform (DDWT), forming the basic component of each description. In order to improve the quality of the side reconstruction, a side sparse algorithm is then applied to the two sparse coefficient trees to produce additional information for side decoding. The basic information from one tree and the additional information from the other are sent to an enhanced X-tree encoder, which is proposed to exploit the strong correlation between the two wavelet trees resulting from the DDWT, forming the bitstream of one description. Since each description includes the basic information and part of the details of the input image, the reconstructed image keeps acceptable quality even if one of the descriptions is lost. Simulation results verify that the proposed algorithm has good coding performance and error resilience.
Citations: 1
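The side-decoding idea, that each description alone should still yield an acceptable reconstruction, can be shown on a 1-D signal. This is a toy polyphase split into two descriptions, not the paper's dual-tree wavelet scheme:

```python
import numpy as np

def make_descriptions(signal):
    """Toy multiple-description split: even samples form one description,
    odd samples the other, so either half carries a coarse version."""
    return signal[0::2], signal[1::2]

def side_reconstruct(desc, which, length):
    """Side decoding from a single description: restore the kept samples and
    linearly interpolate the missing ones from their neighbors."""
    out = np.zeros(length)
    out[which::2] = desc
    missing = 1 - which
    for i in range(missing, length, 2):
        left = out[i - 1] if i - 1 >= 0 else out[i + 1]
        right = out[i + 1] if i + 1 < length else out[i - 1]
        out[i] = (left + right) / 2
    return out

if __name__ == "__main__":
    sig = np.arange(8, dtype=float)
    even, odd = make_descriptions(sig)
    rec = side_reconstruct(even, 0, 8)
    print(rec)  # interior samples of a linear signal are recovered exactly
```

The paper's scheme plays the same game in the DDWT domain, where the redundancy between the two wavelet trees makes the side reconstruction much stronger than this interpolation.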
On pain assessment from facial videos using spatio-temporal local descriptors
Ruijing Yang, Shujun Tong, Miguel Bordallo López, Elhocine Boutellaa, Jinye Peng, Xiaoyi Feng, A. Hadid
DOI: 10.1109/IPTA.2016.7820930
Abstract: Automatically recognizing pain from spontaneous facial expressions has attracted increasing attention, since it can provide a direct and relatively objective indication of the pain experience. Until now, most existing works have focused on analyzing pain from individual images or video frames, hence discarding the spatio-temporal information that can be useful in the continuous assessment of pain. In this context, this paper investigates and quantifies for the first time the role of spatio-temporal information in pain assessment by comparing the performance of several baseline local descriptors used in their traditional spatial form against their spatio-temporal counterparts, which take the video dynamics into account. For this purpose, we perform extensive experiments on two benchmark datasets. Our results indicate that using spatio-temporal information to classify video sequences consistently shows superior performance compared with using only static information.
Citations: 21
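The contrast between spatial and spatio-temporal descriptors can be sketched with LBP-style codes. A minimal sketch assuming plain 8-neighbor LBP and an LBP-TOP-like concatenation over the XY, XT and YT planes of a video volume; the descriptors actually benchmarked in the paper may differ in detail:

```python
import numpy as np

def lbp_hist(img):
    """Plain 8-neighbor LBP histogram of a 2-D array (spatial texture only)."""
    c = img[1:-1, 1:-1]
    code = np.zeros_like(c, dtype=np.uint8)
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for b, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((n >= c).astype(np.uint8) << b)
    return np.bincount(code.ravel(), minlength=256)

def lbp_top_hist(video):
    """LBP-TOP style spatio-temporal descriptor: concatenate LBP histograms
    from the XY, XT and YT planes of a (T, H, W) volume, so temporal
    dynamics enter the descriptor alongside spatial texture."""
    T, H, W = video.shape
    xy = lbp_hist(video[T // 2])          # one spatial slice
    xt = lbp_hist(video[:, H // 2, :])    # time-x plane
    yt = lbp_hist(video[:, :, W // 2])    # time-y plane
    return np.concatenate([xy, xt, yt])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    video = rng.uniform(size=(8, 16, 16))
    print(lbp_top_hist(video).shape)
```

Dropping the XT and YT parts of this descriptor recovers the purely static baseline the paper compares against.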
Feature covariance for human action recognition
Alexandre Perez, Hedi Tabia, D. Declercq, A. Zanotti
DOI: 10.1109/IPTA.2016.7820982
Abstract: In this paper, we present a novel method for human action recognition using covariance features. Computationally efficient action features are extracted from the skeleton of the subject performing the action. They aim to capture the relative positions of the joints and their motion over time. These features are encoded into a compact representation using a covariance matrix. We evaluate the performance of the proposed method and demonstrate its superiority over related state-of-the-art methods on various datasets, including the MSR Action 3D, MSR Daily Activity 3D and UTKinect-Action datasets.
Citations: 4
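The covariance encoding itself is compact enough to sketch. Assuming per-frame joint positions as the features (the paper also adds motion terms), a sequence of arbitrary length is summarized by the upper triangle of their covariance matrix:

```python
import numpy as np

def covariance_descriptor(joints):
    """joints: (T, J, 3) array of J skeleton joint positions over T frames.
    Flatten each frame to a feature vector and summarize the sequence by the
    covariance of those vectors, vectorized as its upper triangle
    (covariance matrices are symmetric, so the upper triangle is lossless)."""
    T, J, D = joints.shape
    X = joints.reshape(T, J * D)      # one feature vector per frame
    C = np.cov(X, rowvar=False)       # (J*D, J*D) covariance matrix
    iu = np.triu_indices(J * D)
    return C[iu]                      # fixed-length descriptor

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seq = rng.normal(size=(10, 4, 3))     # 10 frames, 4 joints
    print(covariance_descriptor(seq).shape)  # (78,)
```

A useful property visible here: the covariance is unchanged when the whole skeleton is translated, which is one reason such descriptors capture relative joint configuration rather than absolute position.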
A multidimensional scaling optimization and fusion approach for the unsupervised change detection problem in remote sensing images
Redha Touati, M. Mignotte
DOI: 10.1109/IPTA.2016.7821021
Abstract: It is well known that the overall performance of the most widely used unsupervised change detection methods, based on the luminance pixel-wise difference, relies mainly on the quality of the so-called difference image and the accuracy of the classification method. In order to address these two issues, this work first estimates a new and robust similarity feature map, playing the same role as the difference image, by specifying a set of constraints expressed for each pair of pixels in the multitemporal images. As a consequence, the proposed change detection method does not require any preprocessing of the multitemporal images, such as radiometric correction/normalization. In addition, the input data can be acquired from different sensors. The quadratic complexity (in the number of pixels) of computing this similarity feature map between the multitemporal images is reduced to linear complexity thanks to a FastMap-based optimization algorithm. Second, to achieve more robustness, changes are then identified from this similarity feature map by combining (fusing) the results of different automatic thresholding algorithms. Experimental results confirm the robustness of the proposed approach.
Citations: 6
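The fusion step, binarizing the similarity map with several automatic thresholds and letting them vote, can be sketched as follows; the mean, median and Otsu thresholds used here stand in for whatever set of thresholding algorithms the paper actually combines:

```python
import numpy as np

def otsu_threshold(x, bins=256):
    """Classic Otsu: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)
    w1 = 1 - w0
    m0 = np.cumsum(p * centers) / np.maximum(w0, 1e-12)
    m1 = (np.sum(p * centers) - np.cumsum(p * centers)) / np.maximum(w1, 1e-12)
    var_between = w0 * w1 * (m0 - m1) ** 2
    return centers[np.argmax(var_between)]

def fused_change_map(sim):
    """Binarize a similarity/difference map with several automatic
    thresholds and fuse the binary decisions by majority vote."""
    thresholds = [sim.mean(), np.median(sim), otsu_threshold(sim.ravel())]
    votes = sum((sim > t).astype(int) for t in thresholds)
    return votes >= 2   # changed where at least 2 of the 3 methods agree

if __name__ == "__main__":
    sim = np.full((32, 32), 0.1)      # unchanged background
    sim[10:13, 10:13] = 0.9           # a 3x3 changed region
    print(fused_change_map(sim).sum())  # 9
```

Majority voting makes the final map robust to any single thresholding rule misfiring on an unusual histogram, which is the motivation the abstract gives for fusing.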
Face spoofing detection with image quality regression
Haoliang Li, Shiqi Wang, A. Kot
DOI: 10.1109/IPTA.2016.7821027
Abstract: Face spoofing detection has attracted attention as a biometric authentication issue. Inspired by the observation that face spoofing detection is highly related to the inherent image quality, which in turn strongly depends on the properties of the capturing devices and conditions, in this paper we tackle the spoofing detection problem with a two-stage learning approach. First, we manually cluster the training samples based on prior knowledge of face sample quality (e.g. camera model), and multiple quality-guided classifiers are trained on each cluster with extracted image quality assessment (IQA) features. Subsequently, a regression function is learned that maps the IQA scores to the corresponding classifier's parameters, which can then be used for classification. Given a new face input for verification, we predict its classifier coefficients from the pre-learned regression model, with which spoofing detection can be effectively achieved. Experimental results show significantly better classification performance compared with directly applying the IQA features to a single classifier.
Citations: 23
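The second stage, regressing classifier parameters on a quality score, can be sketched with least-squares classifiers standing in for the paper's actual models; the cluster data, quality scores and linear forms below are all illustrative assumptions:

```python
import numpy as np

def train_quality_guided(Xs, ys, qs):
    """For each quality cluster (features X, labels +/-1, mean quality q),
    fit a least-squares linear classifier w, then regress w on q so that
    coefficients can be predicted for unseen quality levels."""
    Ws = []
    for X, y in zip(Xs, ys):
        Xb = np.hstack([X, np.ones((len(X), 1))])   # append bias term
        w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
        Ws.append(w)
    W = np.array(Ws)                                 # (n_clusters, d+1)
    Q = np.stack([np.array(qs), np.ones(len(qs))], axis=1)
    A, *_ = np.linalg.lstsq(Q, W, rcond=None)        # linear map q -> w
    return A

def predict(A, q, x):
    """Predict the classifier for quality q, then classify sample x."""
    w = np.array([q, 1.0]) @ A
    return np.sign(np.append(x, 1.0) @ w)

if __name__ == "__main__":
    X = np.array([[-1.0], [1.0], [-2.0], [2.0]])
    y = np.array([-1.0, 1.0, -1.0, 1.0])
    A = train_quality_guided([X, X], [y, y], [0.2, 0.8])  # two toy clusters
    print(predict(A, 0.5, np.array([3.0])))  # 1.0
```

The point of the construction is the second `lstsq`: once classifier weights are a smooth function of quality, a test sample's IQA score alone selects an appropriately specialized classifier.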
Classification of mammographic microcalcification clusters using a combination of topological and location modelling
Oluwaseun Ashiru, R. Zwiggelaar
DOI: 10.1109/IPTA.2016.7820986
Abstract: We have investigated the classification of microcalcification clusters in mammograms by combining two existing approaches. The first extracts topological information (connectivity) about microcalcification clusters and uses it as feature vectors to classify them as benign or malignant. The second extracts location details of microcalcification clusters (where they appear in the breast and/or mammogram) and uses them as feature vectors for the same classification. We have investigated various aspects of both methods and their combination. Our initial results, based on MIAS and DDSM, indicate no significant improvement over the topological approach on its own.
Citations: 2
Automatic detection of cervical cells in Pap-smear images using polar transform and k-means segmentation
M. Neghina, C. Rasche, M. Ciuc, Alina Sultana, Ciprian Tiganesteanu
DOI: 10.1109/IPTA.2016.7821038
Abstract: We introduce a novel method of cell detection and segmentation based on a polar transformation. The method assumes that the seed point of each candidate is placed inside the nucleus. The polar representation, built around the seed, is segmented using k-means clustering into one candidate-nucleus cluster, one candidate-cytoplasm cluster and up to three miscellaneous clusters representing background or surrounding objects that are not part of the candidate cell. To assess the natural number of clusters, the silhouette method is used. In the segmented polar representation, a number of parameters can be conveniently observed and evaluated as fuzzy memberships to the non-cell class, from which the final decision is determined. We tested this method on the notoriously difficult Pap-smear images and report results for a database of approximately 20000 patches.
Citations: 8
MCMC based sampling technique for robust multi-model fitting and visual data segmentation
Alireza Sadri, Ruwan Tennakoon, R. Hoseinnezhad, A. Bab-Hadiashar
DOI: 10.1109/IPTA.2016.7821022
Abstract: This paper approaches the problem of geometric multi-model fitting as a data segmentation problem, solved by a sequence of sampling, model selection and clustering steps. We propose a sampling method that significantly facilitates solving the segmentation problem using the normalized cut. The sampler is a novel application of the Markov chain Monte Carlo (MCMC) method to sample from a distribution in the parameter space obtained by modifying the Least k-th Order Statistics cost function. To sample from this distribution effectively, our proposed Markov chain includes novel long and short jumps to ensure exploration and exploitation of all structures. It also includes fast local optimization steps to target all, even fairly small, putative structures. This leads to a clustering solution through which the final model parameters for each segment are obtained. The method competes favorably with the state of the art both in terms of computational cost and segmentation accuracy.
Citations: 3
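The sampler's two ingredients, a Least k-th Order Statistics style cost and a Markov chain mixing short and long jumps, can be sketched for single-line fitting. This is a generic Metropolis sketch under those assumptions; it omits the paper's local optimization steps and the multi-structure clustering:

```python
import numpy as np

def kth_residual_cost(theta, pts, k):
    """Least k-th Order Statistics style cost: the k-th smallest absolute
    residual of the points to the line y = theta[0]*x + theta[1]."""
    res = np.abs(pts[:, 1] - (theta[0] * pts[:, 0] + theta[1]))
    return np.partition(res, k)[k]

def metropolis_line_fit(pts, k, steps=5000, temp=0.05, seed=0):
    """Metropolis sampler mixing short jumps (local refinement) and long
    jumps (exploration), echoing the paper's short/long-jump chain.
    Returns the best line parameters visited and their cost."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(0, 1, 2)
    cost = kth_residual_cost(theta, pts, k)
    best, best_cost = theta.copy(), cost
    for _ in range(steps):
        scale = 0.05 if rng.random() < 0.8 else 1.0   # short vs long jump
        prop = theta + rng.normal(0, scale, 2)
        c = kth_residual_cost(prop, pts, k)
        if c < cost or rng.random() < np.exp((cost - c) / temp):
            theta, cost = prop, c
            if c < best_cost:
                best, best_cost = prop.copy(), c
    return best, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 100)
    y = 2.0 * x + 1.0 + rng.normal(0, 0.01, 100)
    y[:30] = rng.uniform(-3, 3, 30)          # 30 gross outliers
    theta, cost = metropolis_line_fit(np.column_stack([x, y]), k=60)
    print(theta, cost)
```

Because the cost only looks at the k-th smallest residual, the sampler concentrates on parameters that fit some subset of the data well, which is what lets multiple structures show up as separate modes of the sampled distribution.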