Latest Publications: 2018 7th European Workshop on Visual Information Processing (EUVIP)

Viewport-Aware Omnidirectional Video Streaming Using Visual Attention and Dynamic Tiles
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611777
C. Ozcinar, J. Cabrera, A. Smolic
Abstract: In this paper, we introduce a new adaptive omnidirectional video (ODV) streaming system that uses visual attention (VA) maps to provide enhanced virtual reality (VR) video experiences. Our proposed method benefits from dynamic tiling and viewport-aware bitrate allocation algorithms. Our main contribution is using the VA maps to decide the tiling structure (i.e., tile scheme) per chunk and to distribute a given bitrate budget to each tile in a viewport-aware way. For this, we first estimate viewport-based VA maps using collected users' viewport trajectories. Then, an optimal pair of tiling scheme and unequal per-tile bitrate allocation for a given content is determined per chunk by calculating the expected viewport quality with our proposed VA-weighted objective quality measurement (OmniVA). We evaluate the proposed method's performance under varying bandwidth conditions and viewport trajectories from different users. The results show that the proposed method significantly outperforms the existing tile-based method in terms of viewport-PSNR.
Citations: 5
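To illustrate the viewport-aware bitrate allocation described above, here is a minimal sketch that splits a chunk's bitrate budget across tiles in proportion to a visual-attention weight. The proportional rule, the per-tile floor, and all names are illustrative assumptions; the paper jointly optimizes the tiling scheme and the allocation via its OmniVA measure.

```python
import numpy as np

def allocate_bitrate(va_weights, budget_kbps, min_kbps=100.0):
    """Split a chunk's bitrate budget across tiles in proportion to
    their visual-attention weight, guaranteeing a floor per tile.
    Illustrative only -- the paper optimizes tiling and rates jointly."""
    va = np.asarray(va_weights, dtype=float)
    va = va / va.sum()                      # normalize attention map
    floor = min_kbps * len(va)              # reserve the minimum rates
    spare = max(budget_kbps - floor, 0.0)   # remainder to distribute
    return min_kbps + va * spare

# Example: 6 tiles, attention concentrated on tiles 2 and 3.
rates = allocate_bitrate([0.05, 0.1, 0.4, 0.3, 0.1, 0.05], budget_kbps=6000)
print(rates.round(1), rates.sum())
```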
Content-color-dependent screening (CCDS) using regular or irregular clustered-dot halftones
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611727
Altyngul Jumabayeva, T. Frank, Y. Ben-Shoshan, R. Ulichney, J. Allebach
Abstract: In our previous work, we presented an HVS-based model for the superposition of two clustered-dot color halftones, which are widely used in electrophotographic printers because of those printers' relatively poor print stability. The model helps us decide the best color assignments for two regular or irregular halftones so as to minimize the perceived error [1]. After applying our model to the superposition of three and four clustered-dot color halftones, we concluded that this color assignment plays a significant role in image quality. Moreover, for different combinations of colorant absorptance values, the corresponding best color assignments turn out to be different. Hence, in this paper we propose to apply different color assignments within the image, depending on the local color and content of the image. If the image content locally has high variance in color and texture, halftoning artifacts will not be as visible as artifacts in smooth areas of the image. The focus of this paper is therefore to detect smooth areas of the image and apply the best color assignments in those areas. To detect smooth areas, we segment the image based on the color of the content, using the well-known K-means clustering algorithm along with an edge detection algorithm. We then use our spatiochromatic HVS-based model for the superposition of four halftones to search for the best color assignment in a particular cluster. This approach is primarily directed towards good-quality rendering of large smooth areas, especially areas containing important memory colors, such as flesh tones. We believe that content-color-dependent screening can play an important role in developing high-quality printed color images.
Citations: 0
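A rough sketch of the smooth-area detection step described above: cluster pixels by color with K-means and flag low-variance clusters as smooth. The variance test is a stand-in for the paper's combination of edge detection and the spatiochromatic HVS model; the threshold and function names are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def smooth_clusters(img, k=8, var_thresh=40.0):
    """Segment an RGB image into k color clusters and report which
    clusters are 'smooth' (low color variance), i.e. where halftone
    color assignments matter most. Threshold is an assumption."""
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=k, n_init=4, random_state=0).fit_predict(pixels)
    smooth = []
    for c in range(k):
        members = pixels[labels == c]
        if len(members) and members.var(axis=0).mean() < var_thresh:
            smooth.append(c)
    return labels.reshape(h, w), smooth

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
label_map, smooth = smooth_clusters(img)
print("smooth clusters:", smooth)
```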
Blind DCT-based prediction of image denoising efficiency using neural networks
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611710
Oleksii S. Rubel, Andrii Rubel, V. Lukin, K. Egiazarian
Abstract: Visual quality of digital images acquired by modern mobile cameras is crucial for consumers. Noise is one of the factors that can significantly reduce the visual quality of the acquired data. There are many image denoising methods able to suppress noise efficiently. However, in practice denoising often does not provide sufficient enhancement, or even reduces visual quality compared to the observed noisy data. This paper considers the problem of blind prediction of image denoising efficiency under additive white Gaussian noise. The proposed technique does not require a priori knowledge of the noise variance and uses a moderate amount of image data for analysis. The denoising efficiency prediction employs neural networks (an all-to-all connected multi-layer perceptron) to create a regression model. Image statistics obtained in the spectral domain are used as input data, and state-of-the-art visual quality metrics are used as outputs of the network. The block matching and 3D filtering (BM3D) technique is used as the target denoising method. It is demonstrated that the obtained neural networks are compact, and the overall prediction procedure is fast and accurate enough to confidently answer the question: "Do we need to denoise an image?" The full dataset, executable code, and a demo Android application are available at https://github.com/asrubel/EUVIP2018.
Citations: 7
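The prediction pipeline can be sketched as: gather statistics of 8×8 DCT coefficients from a sample of image blocks, then regress a denoising quality gain with a small multi-layer perceptron. Feature choices, network size, and the toy labels below are assumptions, not the authors' exact configuration (their code is at the GitHub link above).

```python
import numpy as np
from scipy.fft import dctn
from sklearn.neural_network import MLPRegressor

def dct_features(img, n_blocks=200, rng=np.random.default_rng(0)):
    """Sample 8x8 blocks, take their 2-D DCT, and summarize the
    AC-coefficient energy distribution -- a stand-in for the paper's
    spectral-domain statistics."""
    h, w = img.shape
    feats = []
    for _ in range(n_blocks):
        y, x = rng.integers(0, h - 8), rng.integers(0, w - 8)
        block = dctn(img[y:y+8, x:x+8].astype(float), norm='ortho')
        ac = np.abs(block).ravel()[1:]          # drop the DC term
        feats.append(ac)
    feats = np.array(feats)
    return np.concatenate([feats.mean(0), feats.std(0)])

# Toy training loop: images -> features -> predicted PSNR gain of BM3D.
X = np.stack([dct_features(np.random.rand(64, 64)) for _ in range(20)])
y = np.random.rand(20) * 3.0                    # placeholder quality gains
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000).fit(X, y)
print(model.predict(X[:3]))
```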
Efficient video streaming of 360° cameras in Unmanned Aerial Vehicles: an analysis of real video sources
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611639
S. Colonnese, F. Cuomo, Ludovico Ferranti, T. Melodia
Abstract: Video streaming from Unmanned Aerial Vehicles is an innovative service that will be leveraged by applications ranging from entertainment and surveillance to disaster recovery. 360° cameras provide unprecedented visual information and bring services to a new level of immersive experience. However, 360° video sources are still not fully characterized, and this holds especially true for drone-mounted 360° video sources. This paper presents a thorough analysis of the video traffic associated with several 360° camera sequences, acquired by a pedestrian-held camera as well as by a drone-mounted camera in various environments and lighting conditions. A fine-grained rate-distortion analysis is presented for both video frames and video chunks, making this study relevant for HTTP-based video streaming services. The analysis is complemented by a publicly available dataset of 360° video traffic traces that can be used for numerical simulations of Unmanned Aerial Vehicles providing 360° video streaming services.
Citations: 1
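Chunk-level rate analysis of a video trace, as performed in the paper, boils down to aggregating per-frame sizes; a hedged sketch follows. The trace layout and the log-normal toy trace are assumptions — see the authors' public dataset for the real format.

```python
import numpy as np

def chunk_bitrates(frame_bytes, fps=30.0, chunk_seconds=2.0):
    """Aggregate a per-frame size trace (bytes) into per-chunk
    bitrates in Mbit/s, the granularity relevant to HTTP streaming."""
    frames_per_chunk = int(fps * chunk_seconds)
    n_chunks = len(frame_bytes) // frames_per_chunk
    sizes = np.asarray(frame_bytes[:n_chunks * frames_per_chunk])
    per_chunk = sizes.reshape(n_chunks, frames_per_chunk).sum(axis=1)
    return per_chunk * 8 / chunk_seconds / 1e6

trace = np.random.lognormal(mean=10.5, sigma=0.4, size=600)  # fake trace
print(chunk_bitrates(trace).round(2))
```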
deimeq - A Deep Neural Network Based Hybrid No-reference Image Quality Model
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611703
Steve Goering, A. Raake
Abstract: Current no-reference image quality assessment models are mostly based on hand-crafted features (signal, computer vision, …) or deep neural networks. Using DNNs for image quality prediction raises several problems: the input size is restricted, and higher resolutions increase processing time and memory consumption. Large inputs are handled by splitting the image into patches and aggregating a quality score, but in a pure patching approach the connections between sub-images are lost. Moreover, training a DNN from scratch requires a huge dataset, while only small annotated datasets are available. We provide a hybrid solution (deimeq) that predicts image quality using DNN feature extraction combined with random forest models. First, deimeq uses a pre-trained DNN for feature extraction in a hierarchical sub-image approach, which avoids the need for a huge training dataset; the hierarchical connections between sub-images also circumvent pure patching. Second, deimeq can be extended with signal-based features from state-of-the-art models. To evaluate our approach, we perform a strict cross-dataset evaluation on the LIVE-2 and TID2013 datasets with several pre-trained DNNs. Finally, we show that deimeq and its variants perform better than or similarly to other methods.
Citations: 7
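The hybrid idea — features from a frozen pre-trained DNN feeding a random-forest regressor — can be sketched as below. The ResNet-18 backbone, the global pooling, and the toy labels are assumptions; the paper additionally uses a hierarchical sub-image scheme and optional signal-based features.

```python
import numpy as np
import torch
import torchvision.models as models
from sklearn.ensemble import RandomForestRegressor

# Frozen pre-trained backbone as feature extractor (assumption: ResNet-18;
# the paper evaluates several pre-trained DNNs).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # keep the 512-d pooled features
backbone.eval()

def features(batch):                   # batch: (N, 3, 224, 224) float tensor
    with torch.no_grad():
        return backbone(batch).numpy()

# Toy regression: random 'images' and random MOS labels stand in for
# a real annotated IQA dataset such as LIVE-2 or TID2013.
X = features(torch.rand(16, 3, 224, 224))
y = np.random.rand(16) * 5
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print(rf.predict(X[:3]))
```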
RBF Neural Network for Landmine Detection in Hyperspectral Imaging
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611652
Ihab Makki, R. Younes, Mahdi Khodor, Jihan Khoder, C. Francis, T. Bianchi, Patrick Rizk, M. Zucchetti
Abstract: In this work, we evaluate different classification algorithms for multi-target detection in hyperspectral imaging, considering the scenario of landmine detection and comparing the performance of each method in various cases. In addition, we introduce target detection using artificial-intelligence-based methods in order to obtain better detection performance together with target identification and abundance estimation. These algorithms were tested on various types of hyperspectral images in which the spectra of the landmines were planted in different proportions. The results show the advantage of our training strategy for radial basis function neural networks (RBFNN), which detects and identifies targets in hyperspectral images and estimates their abundance at the same time. Moreover, the proposed technique has a computational cost comparable to state-of-the-art target detection techniques.
Citations: 4
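A minimal RBF network for per-pixel spectral classification can be sketched with Gaussian hidden units centered on K-means centroids and linear output weights solved by least squares. The centers, the kernel width, and the toy labels are assumptions, not the authors' training strategy.

```python
import numpy as np
from numpy.linalg import lstsq
from sklearn.cluster import KMeans

def train_rbf(X, y, n_centers=10, gamma=0.1):
    """Fit a minimal RBF network: K-means centers, Gaussian hidden
    layer, linear output weights solved by least squares."""
    centers = KMeans(n_clusters=n_centers, n_init=4,
                     random_state=0).fit(X).cluster_centers_
    H = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    W, *_ = lstsq(H, y, rcond=None)
    return centers, W

def predict_rbf(X, centers, W, gamma=0.1):
    H = np.exp(-gamma * ((X[:, None, :] - centers[None]) ** 2).sum(-1))
    return H @ W

# Toy data: 100 'spectra' of 50 bands, binary mine/background labels.
X = np.random.rand(100, 50)
y = (X[:, :5].mean(1) > 0.5).astype(float)
centers, W = train_rbf(X, y)
print((predict_rbf(X, centers, W) > 0.5).mean())
```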
A performance evaluation framework for video stabilization methods
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611729
Wilko Guilluy, Azeddine Beghdadi, L. Oudre
Abstract: This study discusses both objective and subjective aspects of Video Stabilization Quality Assessment (VSQA). It is based on a corpus of degraded videos representing different challenging scenarios, together with the corresponding outputs of four representative video stabilization (VS) methods. The objective evaluation covers four common VSQA metrics and a new one. The subjective experiments were performed in a controlled laboratory environment using a pairwise-comparison ranking protocol. The results show that performance evaluation of VS methods is far from settled and that satisfactory approaches have yet to emerge. This contribution is a first step towards filling this gap by proposing a video stabilization quality assessment methodology.
Citations: 7
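Among common objective VSQA metrics, stability is often scored through inter-frame similarity; a hedged sketch of one such metric (mean interframe PSNR) follows. It is a representative metric of this family, not necessarily one of the four used in the paper.

```python
import numpy as np

def interframe_psnr(frames):
    """Mean PSNR between consecutive frames: higher values indicate a
    steadier video. A crude stability proxy, blind to cropping or
    wobble artifacts, which is why subjective tests remain necessary."""
    psnrs = []
    for a, b in zip(frames[:-1], frames[1:]):
        mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
        psnrs.append(10 * np.log10(255.0 ** 2 / max(mse, 1e-12)))
    return float(np.mean(psnrs))

# Synthetic 'shaky' clip: a gradient image jittered horizontally.
shaky = [np.roll(np.tile(np.arange(64, dtype=np.uint8), (64, 1)), s, axis=1)
         for s in np.random.randint(-5, 6, size=30)]
print(interframe_psnr(shaky))
```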
Segmented Autoencoders for Unsupervised Embedded Hyperspectral Band Selection
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611643
Julius Tschannerl, Jinchang Ren, J. Zabalza, S. Marshall
Abstract: One of the major challenges in hyperspectral imaging (HSI) is selecting the most informative wavelengths within the vast amount of data in a hypercube. Band selection can reduce the amount of data and the computational cost, as well as counteract the negative effects of redundant and erroneous information. In this paper, we propose an unsupervised, embedded band selection algorithm built on a deep learning framework. Autoencoders are used to reconstruct measured spectral signatures; by putting a sparsity constraint on the input weights, the bands that contribute most to the reconstruction can be identified and chosen as the selected bands. Additionally, by segmenting the input data into several spectral regions and distributing the number of desired bands among these segments according to a density measure, the quality of the selected bands can be increased and the computational time reduced by training several autoencoders. Results on a benchmark remote sensing HSI dataset show that the proposed algorithm improves classification accuracy compared to other state-of-the-art band selection algorithms and thereby lays the basis for a framework of embedded band selection in HSI.
Citations: 13
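The band-selection mechanism — a sparsity (L1) penalty on the autoencoder's input weights, followed by ranking bands by weight magnitude — can be sketched as follows. Layer sizes, the penalty weight, and the use of a single unsegmented autoencoder are simplifying assumptions.

```python
import torch
import torch.nn as nn

class BandSelectAE(nn.Module):
    """Autoencoder whose first-layer weights are L1-penalized so that
    only the most informative spectral bands keep large weights."""
    def __init__(self, n_bands, hidden=32):
        super().__init__()
        self.enc = nn.Linear(n_bands, hidden)
        self.dec = nn.Linear(hidden, n_bands)

    def forward(self, x):
        return self.dec(torch.relu(self.enc(x)))

def select_bands(spectra, n_select=10, l1=1e-3, epochs=200):
    model = BandSelectAE(spectra.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((model(spectra) - spectra) ** 2).mean() \
               + l1 * model.enc.weight.abs().sum()
        loss.backward()
        opt.step()
    score = model.enc.weight.abs().sum(dim=0)   # importance per band
    return torch.topk(score, n_select).indices.sort().values

spectra = torch.rand(256, 100)                  # 256 pixels, 100 bands
print(select_bands(spectra))
```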
Perceptual Video Content Analysis and Application to HEVC Quantization Refinement
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611749
K. Rouis, M. Larabi
Abstract: In this paper, we propose a set of perceptual features that consistently describe visual information. The measurement is performed in the complex frequency domain, following human visual system (HVS) mechanisms. The aim is to explore the performance of these features in a video coding scheme; in particular, we consider the High Efficiency Video Coding (HEVC) standard, as it introduces several efficient tools along with new coding structures. The quantization parameter (QP) is an essential factor affecting coding performance and has a known relationship with the Lagrangian multiplier. Based on the extracted measures, a perceptual factor is proposed to adjust the Lagrangian multiplier, and the QP is subsequently refined from the adjusted value. The BD-rate savings achieved over video sequences of several resolutions, computed with the Bjontegaard metric, show the promising coding efficiency of the proposed method with an adequate rate-distortion (R-D) compromise. We opted for the Structural SIMilarity (SSIM) metric to carry out a perceptual R-D comparison; the R-D curves demonstrate that the obtained bitrate savings come with suitable quality scores compared to the HEVC anchor and a state-of-the-art QP refinement model.
Citations: 0
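The QP refinement can be sketched with the standard HEVC relation between QP and the Lagrangian multiplier, λ ≈ 0.85 · 2^((QP−12)/3): scale λ by a content-dependent perceptual factor and map it back to a refined QP. The clipping range and the example factors are assumptions; the paper derives its factor from the proposed perceptual features.

```python
import math

def refine_qp(qp, perceptual_factor, max_delta=3):
    """Scale the Lagrangian multiplier by a content-dependent
    perceptual factor and map it back to a QP via the standard
    HEVC relation lambda ~= 0.85 * 2**((QP - 12) / 3)."""
    lam = 0.85 * 2 ** ((qp - 12) / 3)
    lam_adj = lam * perceptual_factor
    qp_new = 12 + 3 * math.log2(lam_adj / 0.85)
    qp_new = max(qp - max_delta, min(qp + max_delta, qp_new))
    return int(round(max(0, min(51, qp_new))))

# A perceptually busy block (factor > 1) tolerates coarser quantization.
print(refine_qp(32, 1.5), refine_qp(32, 0.7))
```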
Towards Physical Distortion Identification and Removal in Document Images
2018 7th European Workshop on Visual Information Processing (EUVIP) Pub Date: 2018-11-01 DOI: 10.1109/EUVIP.2018.8611786
Tan Lu, A. Dooms
Abstract: Physical distortions, alongside digital artefacts, are commonly seen in document images. Their presence hampers the optical character recognition (OCR) process, which not only reduces the amount of automatically retrievable content but also degrades the performance of other document analysis algorithms that rely on layout analysis or content recognition. This paper proposes a method to identify and remove certain types of physical distortions from document images. By exploiting the intensity and spatial relations of distorted pixels, we construct a conditional random field (CRF) based method for distortion identification. Furthermore, a peak-searching method is proposed so that the model parameters of the energy functions in the conditional probability are learnt automatically from the image. Discrimination between pixels of original document content and those of physical noise is obtained by maximizing the conditional probability in the CRF model. Examples from real-life image samples demonstrate the effectiveness of the proposed method.
Citations: 1
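The CRF labeling step can be sketched as minimizing an energy with a unary term (how distortion-like a pixel looks) plus a pairwise smoothness penalty; here a few sweeps of iterated conditional modes (ICM) stand in for the paper's inference, and both energy terms are assumptions.

```python
import numpy as np

def icm_label(unary, beta=1.0, sweeps=5):
    """Binary CRF labeling by iterated conditional modes: each pixel
    picks the label minimizing its unary cost plus a Potts penalty
    for disagreeing with its 4-neighborhood."""
    labels = unary.argmin(axis=-1)
    h, w, _ = unary.shape
    for _ in range(sweeps):
        for y in range(h):
            for x in range(w):
                nbrs = [labels[yy, xx]
                        for yy, xx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1))
                        if 0 <= yy < h and 0 <= xx < w]
                costs = [unary[y, x, l] + beta * sum(n != l for n in nbrs)
                         for l in (0, 1)]
                labels[y, x] = int(np.argmin(costs))
    return labels

# Unary costs from intensity: dark pixels lean 'content', bright 'noise'.
img = np.random.rand(32, 32)
unary = np.stack([img, 1 - img], axis=-1)   # cost of label 0 / label 1
print(icm_label(unary).mean())
```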