2018 7th European Workshop on Visual Information Processing (EUVIP) — Latest Publications

Compound Markov Random Field Model of Signals on Graph: An Application to Graph Learning
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611758
S. Colonnese, Giulio Pagliari, M. Biagi, R. Cusani
{"title":"Compound Markov Random Field Model of Signals on Graph: An Application to Graph Learning","authors":"S. Colonnese, Giulio Pagliari, M. Biagi, R. Cusani","doi":"10.1109/EUVIP.2018.8611758","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611758","url":null,"abstract":"In this work we address the problem of Signal on Graph (SoG) modeling, which can provide a powerful image processing tool for suitable SoG construction. We propose a novel SoG Markovian model suited to jointly characterizing the graph signal values and the graph edge processes. Specifically, we resort to the compound MRF called pixel-edge model formerly introduced in natural images modeling and we reformulate it to frame SoG modeling. We derive the Maximum A Posteriori Laplacian estimator associated to the compound MRF, and we show that it encompasses simpler state-of-the-art estimators for proper parameter settings. Numerical simulations show that the Maximum A Priori Laplacian estimator based on the proposed model outperforms state-of-the-art competitors under different respects. The Spectral Graph Wavelet Transform basis associated to the Maximum A Priori Laplacian estimation guarantees excellent compression of the given SoG. These results show that the compound MRF represents a powerful theoretical tool to characterize the strong and rich interactions that can be found between the signal values and the graph structures, and pave the way to its application to various SoG problems.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126285659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
A Neural Network-based Approach for Public Transportation Prediction with Traffic Density Matrix
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611683
Dancho Panovski, Veronica Scurtu, T. Zaharia
{"title":"A Neural Network-based Approach for Public Transportation Prediction with Traffic Density Matrix","authors":"Dancho Panovski, Veronica Scurtu, T. Zaharia","doi":"10.1109/EUVIP.2018.8611683","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611683","url":null,"abstract":"In today's modern cities, mobility is of crucial importance, and public transportation is particularly concerned. The main objective is to propose solutions to a given, practical problem, which specifically concerns the bus arrival time at various bus stop stations, by taking to account local traffic conditions. We show that a global prediction approach, under some global macro-parameters (e.g., total number of vehicles or pedestrians) is not feasible. This observation leads us to the introduction of a finer granularity approach, where the traffic conditions are represented in terms of a traffic density matrix. Under this new paradigm, the experimental results obtained with both linear and neural networks (NN) approaches show promising prediction performances. Thus, the NN approach yields 24% more accurate prediction performances than a basic, linear regression.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126744029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Proceedings of the 2018 7th European Workshop on Visual Information Processing (EUVIP)
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/euvip.2018.8611752
{"title":"Proceedings of the 2018 7th European Workshop on Visual Information Processing (EUVIP)","authors":"","doi":"10.1109/euvip.2018.8611752","DOIUrl":"https://doi.org/10.1109/euvip.2018.8611752","url":null,"abstract":"","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125727518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Novel Human Identification Method by Gait using Dynamics of Feature Points and Local Shape Features
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611680
Daisuke Imoto, K. Kurosawa, K. Tsuchiya, K. Kuroki, Manato Hirabayashi, N. Akiba, H. Kakuda, K. Tanabe, Yoshinori Hawai
{"title":"A Novel Human Identification Method by Gait using Dynamics of Feature Points and Local Shape Features","authors":"Daisuke Imoto, K. Kurosawa, K. Tsuchiya, K. Kuroki, Manato Hirabayashi, N. Akiba, H. Kakuda, K. Tanabe, Yoshinori Hawai","doi":"10.1109/EUVIP.2018.8611680","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611680","url":null,"abstract":"Gait analysis has been recently evolving techniques by which we can identify individuals using their gait patterns. One of appearance-based methods based on GEI (Gait Energy Image) has been used for forensic purpose in Japan. As long as the condition of two footages is close to an ideal case, which means the two footages are the same view-angle, clothing, resolution, frame-rate and stable walking, this method has been very useful so far. However, if the condition becomes far from an ideal case, the identification accuracy has become dropped down, resulting in analysis impossible. Here, we construct a novel human identification method based on comparison of dynamic features, which takes advantage of features of both appearance-based method and model-based method. Feature points (resemble to joint-points in model-based method) and those local shape features are semi-automatically extracted from silhouette sequences, and then the matching probability of two footages is calculated by comparing the dynamics of extracted features. It is found that GEI-based method is more useful in cases of frontal view, low resolution and comparison of multi view-angles, whereas the proposed method is more useful in cases of lateral view, low frame-rate and clothing variation condition. The results suggested that GEI-based method is superior to characterizing ‘figure’ information, whereas the proposed method is superior to characterizing ‘dynamic’ information of human gait.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122779034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Choice of the Parameter for BM3D Denoising Algorithm Using No-Reference Metric
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611679
N. Mamaev, D. Yurin, A. Krylov
{"title":"Choice of the Parameter for BM3D Denoising Algorithm Using No- Reference Metric","authors":"N. Mamaev, D. Yurin, A. Krylov","doi":"10.1109/EUVIP.2018.8611679","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611679","url":null,"abstract":"An automatic multiscale algorithm for Block-matching and 3D filtering (BM3D) method de noising parameter selection has been proposed. To optimize the filtering parameter the presence of retained structures in the ridge areas is analysed for the difference of the initial noisy and filtered images. Appearance of regular components on method noise is controlled using mutual information. An estimation of image characteristic details is based on Hessian matrix eigenvalues analysis. Images with added controlled Gaussian noise from general image TID2013 and BSDS500 databases were used for testing. It was found that the proposed no-reference metric outperforms existing no-reference metrics in selecting optimal denoising parameter. Algorithm calculation time does not depend on the image noise level and looks promising to be used in image adaptive BM3D-based methods.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127159936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
DCT-Tensor-Net for Solar Flares Detection on IRIS Data
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611672
Denis Ullmann, S. Voloshynovskiy, L. Kleint, S. Krucker, M. Melchior, C. Huwyler, Brandon Panos
{"title":"DCT-Tensor-Net for Solar Flares Detection on IRIS Data","authors":"Denis Ullmann, S. Voloshynovskiy, L. Kleint, S. Krucker, M. Melchior, C. Huwyler, Brandon Panos","doi":"10.1109/EUVIP.2018.8611672","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611672","url":null,"abstract":"Flares are an eruptive phenomenon observed on the sun, which are major protagonists in space weather and can cause adverse effects such as disruptions in communication, power grid failure and damage of satellites. Our method answers the importance of the time component in some scientific video observations, especially for flare detection and the study is based on NASA's Interface Region Imaging Spectrograph (IRIS) observations of the sun since 2013, which consists of a very asymmetrical and unlabeled big data. For detecting and analyzing flares in our IRIS solar video observation data, we created a discrete cosine transform tool DCT- Tensor-Net which uses an empirically handcrafted harmonic representation of our video data. This is one of the first tools for detecting flares based on IRIS images. Our method reduces the false detections of flares by taking into consideration their specific local spatial and temporal patterns.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127900258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Grid Warping Postprocessing for Linear Motion Blur in Images
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611700
A. Nasonov, Yakov Pchelintsev, A. Krylov
{"title":"Grid Warping Postprocessing for Linear Motion Blur in Images","authors":"A. Nasonov, Yakov Pchelintsev, A. Krylov","doi":"10.1109/EUVIP.2018.8611700","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611700","url":null,"abstract":"The paper presents a method for linear motion blur and out-of-focus blur suppression in photographic images. Conventional image deconvolution algorithms usually have a regularization parameter that specifies a trade-off between incomplete blur removal and high probability of artifacts like ringing and noise. The idea of the proposed image deblurring method is to apply grid warping approach to improve image sharpness after conventional image deconvolution algorithms used with strong regularization. Grid warping algorithm moves pixels in edge neighborhood area towards the edges making them sharper without introducing artifacts like ringing and noise. The proposed method is expected to have the same sharpness but decreased risk of artifacts compared to standalone image deconvolution. In order to validate the proposed scheme, we have applied it to artificially blurred images and images with real blur, with different levels of noise and blur radii, directions and lengths.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130289665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
A Hybrid Approach to Hand Detection and Type Classification in Upper-Body Videos
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611755
Katerina Papadimitriou, G. Potamianos
{"title":"A Hybrid Approach to Hand Detection and Type Classification in Upper-Body Videos","authors":"Katerina Papadimitriou, G. Potamianos","doi":"10.1109/EUVIP.2018.8611755","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611755","url":null,"abstract":"Detection of hands in videos and their classification into left and right types are crucial in various human-computer interaction and data mining systems. A variety of effective deep learning methods have been proposed for this task, such as region-based convolutional neural networks (R-CNNs), however the large number of their proposal windows per frame deem them computationally intensive. For this purpose we propose a hybrid approach that is based on substituting the “selective search” R-CNN module by an image processing pipeline assuming visibility of the facial region, as for example in signing and cued speech videos. Our system comprises two main phases: preprocessing and classification. In the preprocessing stage we incorporate facial information, obtained by an AdaBoost face detector, into a skin-tone based segmentation scheme that drives Kalman filtering based hand tracking, generating very few candidate windows. During classification, the extracted proposal regions are fed to a CNN for hand detection and type classification. Evaluation of the proposed hybrid approach on four well-known datasets of gestures and signing demonstrates its superior accuracy and computational efficiency over the R-CNN and its variants.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132331745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Using Visible+NIR Information for CNN Face Recognition
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611681
Sanae Boutarfass, B. Besserer
{"title":"Using Visible+NIR information for CNN face recognition","authors":"Sanae Boutarfass, B. Besserer","doi":"10.1109/EUVIP.2018.8611681","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611681","url":null,"abstract":"Basically every digital camera can acquire information that extends the visible spectrum since the sensor is sensitive to the near-infrared spectrum - and this sometimes requires only minor modifications to the device. So, using a conventional digital camera and stripping off the internal ICF (Infrared Cut-off Filter) filter, we use the captured Visible + NIR images (also called full-spectrum or VNIR images) for the classical face recognition problem. The camera stores the image as 3 channels RGB files, and training and evaluating a CNN with these full-spectrum images lead to surprisingly good results. On the contrary, doing the same with RGB+NIR (4 channels) does not perform as well. The paper shows that the contribution of the blue channel to this task is weak, and the recognition rate raises significantly when NIR is added to the channels, adding information and increasing signal to noise ratio especially in the blue channel.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"232 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132117920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
On the Layer Selection in Small-Scale Deep Networks
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611738
A. Muravev, Jenni Raitoharju, M. Gabbouj
{"title":"On the Layer Selection in Small-Scale Deep Networks","authors":"A. Muravev, Jenni Raitoharju, M. Gabbouj","doi":"10.1109/EUVIP.2018.8611738","DOIUrl":"https://doi.org/10.1109/EUVIP.2018.8611738","url":null,"abstract":"Deep learning algorithms (in particular Convolutional Neural Networks, or CNNs) have shown their superiority in computer vision tasks and continue to push the state of the art in the most difficult problems of the field. However, deep models frequently lack interpretability. Current research efforts are often focused on increasingly complex and computationally expensive structures. These can be either handcrafted or generated by an algorithm, but in either case the specific choices of individual structural elements are hard to justify. This paper aims to analyze statistical properties of a large sample of small deep networks, where the choice of layer types is randomized. The limited representational power of such models forces them to specialize rather than generalize, resulting in several distinct structural patterns. Observing the empirical performance of structurally diverse weaker models thus allows for some practical insight into the connection between the data and the choice of suitable CNN architectures.","PeriodicalId":252212,"journal":{"name":"2018 7th European Workshop on Visual Information Processing (EUVIP)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114144195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2