2017 IEEE International Conference on Image Processing (ICIP): Latest Publications

Data-driven assimilation of irregularly-sampled image time series
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8297094
Ronan Fablet, Phi Huynh Viet, Redouane Lguensat, B. Chapron
Abstract: We address in this paper the reconstruction of irregularly-sampled image time series, with an emphasis on geophysical remote sensing data. We develop a data-driven approach, referred to as analog assimilation and stated as an ensemble Kalman method. Contrary to model-driven assimilation schemes, we do not exploit a physically-derived dynamical prior; instead, we build a data-driven prior from a representative dataset of the considered image dynamics. Our contribution here is to extend analog assimilation to images, which involve a high-dimensional state space. We combine patch-based representations with a multiscale PCA-constrained decomposition. Numerical experiments on the interpolation of missing data in satellite-derived ocean remote sensing images demonstrate the relevance of the proposed scheme: it outperforms classical optimal interpolation with a relative RMSE gain of about 50% for the considered case study.
Citations: 4
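The core of the analog method described in the abstract above is to replace a physical dynamical model with nearest-neighbour look-ups in a catalog of past state transitions. The following is a minimal sketch of that analog forecasting step; the function name, the inverse-distance weighting, and the toy catalog format are our own illustrative choices, not the authors' implementation:

```python
import math

def analog_forecast(state, catalog, k=3):
    """Forecast the next state by averaging the successors of the k
    catalog states closest to `state` (the 'analogs').

    catalog: list of (state_t, state_t_plus_1) pairs, each a list of floats.
    The successors are combined with inverse-distance weights.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # keep the k catalog entries whose current state is closest to `state`
    ranked = sorted(catalog, key=lambda pair: dist(state, pair[0]))[:k]
    # inverse-distance weights (epsilon avoids division by zero on exact hits)
    weights = [1.0 / (dist(state, s) + 1e-9) for s, _ in ranked]
    total = sum(weights)
    dim = len(state)
    # weighted average of the analogs' successors
    return [sum(w * succ[i] for w, (_, succ) in zip(weights, ranked)) / total
            for i in range(dim)]
```

In the paper this prior drives an ensemble Kalman update on patch-level, PCA-compressed states; the sketch only shows the data-driven forecast itself.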
An adaptive perceptual quantization method for HDR video coding
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296437
Yi Liu, N. Sidaty, W. Hamidouche, O. Déforges, G. Valenzise, Emin Zerman
Abstract: This paper presents a new adaptive perceptual quantization method for High Dynamic Range (HDR) content. The method considers the luminance distribution of the HDR image as well as the Minimum Detectable Contrast (MDC) thresholds to preserve contrast information during quantization. Based on this method, we develop a mapping function for HDR video compression and apply it to an HEVC Main 10 Profile-based video coding chain. Our experiments show that the proposed mapping function can efficiently improve the quality of the reconstructed HDR video in both objective and subjective assessments.
Citations: 8
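The idea of adapting the quantizer to the luminance distribution, described in the abstract above, can be illustrated with a quantile-based codebook: codes cover equal shares of the pixel mass, so densely populated luminance ranges get finer quantization. This is a toy stand-in for the paper's MDC-threshold-driven construction; the function names and the quantile rule are ours:

```python
def adaptive_codebook(luminances, n_codes):
    """Build code boundaries at luminance quantiles, so each code index
    covers roughly the same number of pixels."""
    xs = sorted(luminances)
    return [xs[(i * len(xs)) // n_codes] for i in range(1, n_codes)]

def quantize(lum, boundaries):
    """Map a luminance value to its code index (count of boundaries passed)."""
    return sum(1 for b in boundaries if lum >= b)
```

A real HDR pipeline would instead shape a PQ-like transfer function so that each quantization step stays below the minimum detectable contrast.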
Transferring CNNs to multi-instance multi-label classification on small datasets
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296498
Mingzhi Dong, Kunkun Pang, Yang Wu, Jing-Hao Xue, Timothy M. Hospedales, T. Ogasawara
Abstract: Image tagging is a well-known challenge in image processing. It is typically addressed through multi-instance multi-label (MIML) classification methodologies. Convolutional Neural Networks (CNNs) possess great potential to perform well on MIML tasks, since multi-level convolution and max pooling coincide with the multi-instance setting, and the sharing of hidden representations may benefit multi-label modeling. However, CNNs usually require a large amount of carefully labeled data for training, which is hard to obtain in many real applications. In this paper, we propose a new approach for transferring pre-trained deep networks, such as VGG16 trained on ImageNet, to small MIML tasks. We extract features from each group of the network's layers and apply multiple binary classifiers to them for multi-label prediction. Moreover, we adopt L1-norm regularized Logistic Regression (L1LR) to find the most effective features for learning the multi-label classifiers. Experimental results on the two most widely used, relatively small benchmark MIML image datasets demonstrate that the proposed approach substantially outperforms state-of-the-art algorithms in terms of all popular performance metrics.
Citations: 4
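The L1LR step in the abstract above selects features by driving the weights of uninformative ones to exactly zero. A minimal self-contained sketch of L1-regularized logistic regression trained with proximal gradient descent (ISTA) follows; the hyperparameters and toy data are illustrative, not the paper's settings:

```python
import math

def soft_threshold(w, t):
    """Proximal operator of the L1 norm: shrink toward zero by t."""
    return math.copysign(max(abs(w) - t, 0.0), w)

def l1_logreg(X, y, lam=0.1, lr=0.1, iters=500):
    """L1-regularized logistic regression via ISTA: a gradient step on the
    logistic loss followed by soft-thresholding of the weights (the bias
    is left unpenalized). Sparse weights indicate the selected features."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    b = 0.0
    for _ in range(iters):
        gw = [0.0] * d
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid
            err = p - yi
            for j in range(d):
                gw[j] += err * xi[j] / n
            gb += err / n
        w = [soft_threshold(wj - lr * gj, lr * lam) for wj, gj in zip(w, gw)]
        b -= lr * gb
    return w, b
```

In the paper's setting, one such classifier is trained per label on the features extracted from each group of CNN layers; scikit-learn's `LogisticRegression(penalty='l1')` would be the practical equivalent.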
A data-driven approach to feature space selection for robust micro-endoscopic image reconstruction
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296680
P. Bourdon, D. Helbert
Abstract: In this article, we propose a new on-line feature space selection strategy for displacement field estimation, in the context of multi-view reconstruction of biological images acquired by a multi-photon micro-endoscope. While the high variety of targets encountered in clinical endoscopy induces enough texture feature variability to prohibit the use of recent supervised learning or feature-matching-based visual tracking methods, we show how on-line learning combined with a classical method such as Digital Image Correlation (DIC) can contribute to improving convex optimization-based template matching techniques.
Citations: 3
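The correlation step behind DIC-style displacement estimation, mentioned in the abstract above, amounts to sliding a template over the signal and keeping the shift that maximizes zero-mean normalized cross-correlation. A 1-D toy sketch (function names ours; real DIC operates on 2-D subsets with subpixel refinement):

```python
def ncc(patch, template):
    """Zero-mean normalized cross-correlation between two equal-length
    sequences; 1.0 means a perfect (affine) match."""
    mp = sum(patch) / len(patch)
    mt = sum(template) / len(template)
    a = [x - mp for x in patch]
    b = [x - mt for x in template]
    num = sum(x * y for x, y in zip(a, b))
    den = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return num / den if den else 0.0

def best_shift(signal, template):
    """Return the integer displacement maximizing NCC of the template
    against the signal."""
    L = len(template)
    return max(range(len(signal) - L + 1),
               key=lambda s: ncc(signal[s:s + L], template))
```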
Performance comparison of Bayesian iterative algorithms for three classes of sparsity enforcing priors with application in computed tomography
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296949
Mircea Dumitru, Li Wang, N. Gac, A. Mohammad-Djafari
Abstract: Piecewise constant or homogeneous image reconstruction in the context of X-ray Computed Tomography is considered within a Bayesian approach. More precisely, the sparse transformation of such images is modelled with heavy-tailed distributions expressed as Normal variance mixture marginals. The derived iterative algorithms (via Joint Maximum A Posteriori estimation) have identical updating expressions, except for the estimated variances. We show that the algorithms behave differently in terms of sensitivity to model selection and reconstruction performance when applied to Computed Tomography.
Citations: 5
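The shared structure the abstract above points to — identical update equations that differ only in how the per-coefficient variances are re-estimated — can be illustrated on a toy denoising problem with an identity forward operator. This is our own simplified sketch of a JMAP-style alternation, not the paper's CT algorithms; in particular the variance rule `v_j = x_j**2 + eps` is just one representative choice:

```python
def jmap_sparse_denoise(y, sigma2=0.1, iters=50, eps=1e-6):
    """Alternate (1) the posterior-mean update of x given per-coefficient
    prior variances v (a Wiener-style shrinkage for Gaussian noise with an
    identity forward operator) and (2) re-estimation of v from x. Small
    coefficients collapse toward zero; large ones are preserved."""
    x = list(y)
    for _ in range(iters):
        v = [xj * xj + eps for xj in x]                     # variance update
        x = [vj / (vj + sigma2) * yj for vj, yj in zip(v, y)]  # shrinkage
    return x
```

Swapping in a different variance update (e.g. one derived from a Student-t or Laplace mixture representation) changes only the first line of the loop, which is exactly the axis along which the paper compares the three prior classes.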
The Wits Intelligent Teaching System: Detecting student engagement during lectures using convolutional neural networks
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296804
Richard Klein, T. Çelik
Abstract: To perform contingent teaching and be responsive to students' needs during class, lecturers must be able to quickly assess the state of their audience. While effective teachers can easily gauge the affective state of their students, this becomes increasingly difficult and less precise as class sizes grow. The Wits Intelligent Teaching System (WITS) aims to assist lecturers with real-time feedback regarding student affect, focusing primarily on recognising engagement or the lack thereof. Student engagement is labelled based on behaviours and postures common to classroom settings. These proxies are used in an observational checklist to construct an engagement dataset, upon which a CNN based on AlexNet is successfully trained; it significantly outperforms a Support Vector Machine approach. The deep learning approach provides satisfactory results on a challenging real-world dataset with significant occlusion, lighting, and resolution constraints.
Citations: 19
Unsupervised hyperspectral band selection via multi-feature information-maximization clustering
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296339
M. Bevilacqua, Y. Berthoumieu
Abstract: This paper presents a new approach for unsupervised band selection in the context of hyperspectral imaging. The hyperspectral band selection (HBS) task is considered as a clustering problem: bands are clustered in the image space, and one representative image is kept for each cluster to be part of the set of selected bands. The proposed clustering method falls into the family of information-maximization clustering, where the mutual information between data features and cluster assignments is maximized. Inspired by a clustering method of this family, we adapt it to the HBS problem and extend it to the case of multiple image features. A pixel selection step is also integrated to reduce the spatial support of the feature vectors, thus mitigating the curse of dimensionality. Experiments on several standard datasets show that the bands selected with our algorithm lead to higher classification performance than other state-of-the-art HBS methods.
Citations: 12
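The cluster-then-pick-a-representative formulation of HBS described in the abstract above can be sketched with plain k-means as a stand-in for the paper's information-maximization clustering (the representative-per-cluster step is the same). Deterministic initialization, function names, and the toy data layout are our own choices:

```python
def select_bands(bands, k, iters=20):
    """Cluster hyperspectral bands (each a flattened image vector) with
    k-means and return the sorted indices of one representative band per
    cluster: the band closest to its cluster mean."""
    def d(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # deterministic init: evenly spaced bands as initial centres
    step = max(1, len(bands) // k)
    centers = [list(bands[i * step]) for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for band in bands:
            groups[min(range(k), key=lambda c: d(band, centers[c]))].append(band)
        for c, g in enumerate(groups):
            if g:  # keep the old centre if a cluster empties out
                centers[c] = [sum(col) / len(g) for col in zip(*g)]
    # keep the band nearest each centre as the cluster representative
    return sorted(min(range(len(bands)), key=lambda i: d(bands[i], centers[c]))
                  for c in range(k))
```

The paper additionally maximizes mutual information over multiple image features and subsamples pixels to shrink the feature vectors; none of that is reflected here.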
Deep regional feature pooling for video matching
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296307
Yan Bai, Jie Lin, V. Chandrasekhar, Yihang Lou, Shiqi Wang, Ling-yu Duan, Tiejun Huang, A. Kot
Abstract: In this work, we study deep global descriptors for video matching with regional feature pooling. We aim to analyze the joint effect of ROI (Region of Interest) size and pooling moment on video matching performance. To this end, we propose to mathematically model the distribution of the video matching function with a pooling function nested inside it. Matching performance can then be estimated from the separability of the class-conditional distributions of matching and non-matching pairs. Empirical studies on the challenging MPEG CDVA dataset demonstrate that the performance trends predicted by the model are consistent with experimental results, even though the theoretical model is greatly simplified compared to video matching and retrieval in practice.
Citations: 1
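A "pooling moment" as studied in the abstract above is commonly realized as a generalized (power) mean over regional activations: the exponent interpolates between average pooling and max pooling. A one-line sketch, assuming non-negative (e.g. post-ReLU) activations; the function name is ours:

```python
def generalized_mean_pool(features, p):
    """Generalized mean over a region's activations: p=1 is average
    pooling, and large p approaches max pooling. Assumes features >= 0."""
    n = len(features)
    return (sum(f ** p for f in features) / n) ** (1.0 / p)
```

Varying `p` jointly with the ROI size is the two-dimensional design space whose effect on matching separability the paper models.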
Deep discovery of facial motions using a shallow embedding layer
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296545
Afsane Ghasemi, Mahsa Baktash, S. Denman, S. Sridharan, Dung Nguyen Tien, C. Fookes
Abstract: Unique encoding of the dynamics of facial actions has the potential to yield a spontaneous facial expression recognition system. The most promising existing approaches rely on deep learning of facial actions. However, current approaches are often computationally intensive, requiring a great deal of memory and processing time, and they typically ignore the temporal aspect of facial actions, despite the wealth of information available in the spatial dynamics and their temporal evolution from the neutral state to the apex state. To tackle these challenges, we propose a deep learning framework that uses 3D convolutional filters to extract spatio-temporal features, followed by an LSTM network able to integrate the evolution of short-duration spatio-temporal features as an emotion progresses from the neutral state to the apex state. To reduce parameter redundancy and accelerate the learning of the recurrent neural network, we propose a shallow embedding layer that reduces the number of parameters in the LSTM by up to 98% without sacrificing recognition accuracy. As the fully connected layer contains approximately 95% of the parameters in the network, we decrease the number of parameters in this layer before passing features to the LSTM network, which significantly improves training speed and makes it possible to deploy a state-of-the-art deep network in real-time applications. We evaluate the proposed framework on the DISFA and UNBC-McMaster Shoulder Pain datasets.
Citations: 4
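The parameter saving from the shallow embedding layer described in the abstract above comes from simple counting: an LSTM's parameter count scales with its input width, so a bottleneck before it shrinks the dominant term. The sketch below just does that arithmetic; the example dimensions are ours, and the exact 98% figure in the paper depends on the actual network sizes:

```python
def fc_params(n_in, n_out):
    """Parameters in a fully connected layer: weights + biases."""
    return n_in * n_out + n_out

def lstm_params(n_in, n_hidden):
    """An LSTM layer has 4 gates, each with input weights, recurrent
    weights, and a bias vector."""
    return 4 * (n_in * n_hidden + n_hidden * n_hidden + n_hidden)

def reduction_with_embedding(n_feat, n_emb, n_hidden):
    """Fractional parameter reduction from inserting a shallow embedding
    (bottleneck) layer between wide spatio-temporal features and the LSTM."""
    direct = lstm_params(n_feat, n_hidden)
    with_emb = fc_params(n_feat, n_emb) + lstm_params(n_emb, n_hidden)
    return 1.0 - with_emb / direct
```

For example, feeding 8192-dimensional features through a 64-unit embedding before a 512-unit LSTM already removes roughly 90% of the parameters on the LSTM path in this toy count.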
Reflectance-based surface saliency
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-17 DOI: 10.1109/ICIP.2017.8296320
Gilles Pitard, Gaëtan Le Goïc, A. Mansouri, H. Favrelière, Maurice Pillet, S. George, J. Hardeberg
Abstract: In this paper, we propose an original methodology for computing saliency maps from high-dimensional RTI (Reflectance Transformation Imaging) data. Unlike most classical methods, our approach aims at devising an intrinsic visual saliency of the surface, independent of the sensor (image) and the geometry of the scene (light-object-camera). From RTI data, we use the DMD (Discrete Modal Decomposition) technique for angular reflectance reconstruction, which we extend with a new transformation on the modal basis enabling a rotation-invariant representation of the reconstructed reflectances. This orientation invariance of the resulting reflectance shapes fosters a robust estimation of saliency maps linked to the local visual appearance of surfaces in the scene. The proposed methodology has been tested and validated on real surfaces with controlled singularities, and the results demonstrate its efficiency: the estimated saliency maps show strong correlation with sensorial visual assessments.
Citations: 13