Latest Publications: 2017 IEEE International Conference on Image Processing (ICIP)

Coding sensitive based approximation algorithm for power efficient VBS-DCT VLSI design in HEVC hardwired Intra encoder
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296833
Liangliang Chang, Zhenyu Liu, Xiangyang Ji, Dongsheng Wang
Abstract: High Efficiency Video Coding (HEVC), the latest video coding standard, achieves roughly a 50% bit-rate reduction while maintaining visual quality comparable to H.264/AVC. Rate-Distortion Optimization (RDO) is a computation-intensive module in HEVC encoding; during Intra coding, RDO accounts for 62% of the overall encoding time. The 2-dimensional DCT is the most area- and power-consuming component of a VLSI implementation of the RDO module. In this paper, we decompose the matrix multiplication of the DCT into several sparse butterfly structures in series. In addition, our approximation algorithm drops the computation and storage of the 25% highest-frequency coefficients. The proposed algorithms are integrated in HM15.0. It is verified that our methods save 15.9% of encoding time with a 1.03% BD-BR increase. We further implement the DCT VLSI design using the TSMC 90nm standard cell library. Under worst-case conditions (125°C, 0.9V), the power dissipation of our DCT is 12.7mW at the 311MHz maximum clock speed. Compared to the primitive design, we achieve 71.9% hardware and 70.2% power reductions.
Citations: 0
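A minimal NumPy/SciPy sketch of the coefficient-dropping idea from the abstract above: take a block's 2D DCT and zero the highest-frequency quarter of the coefficients. The butterfly decomposition, HEVC integer transforms, and the paper's exact dropping rule are not reproduced; the (u + v)-band rule and the 32x32 block size are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def approx_dct2(block: np.ndarray, drop_ratio: float = 0.25) -> np.ndarray:
    """Forward 2D DCT with the highest-frequency coefficients zeroed."""
    n = block.shape[0]
    coeffs = dctn(block, norm="ortho")
    # Zero coefficients whose (row + col) frequency index falls in the
    # top `drop_ratio` band -- a hypothetical dropping rule.
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    keep = (u + v) < np.quantile(u + v, 1.0 - drop_ratio)
    return coeffs * keep

block = np.random.default_rng(0).random((32, 32))   # a fake 32x32 VBS block
recon = idctn(approx_dct2(block), norm="ortho")
print("reconstruction MSE:", np.mean((block - recon) ** 2))
```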
Robust linear unmixing with enhanced sparsity
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296861
Alexandre Tiard, Laurent Condat, Lucas Drumetz, J. Chanussot, W. Yin, Xiaoxiang Zhu
Abstract: Spectral unmixing is a central problem in hyperspectral imagery. It usually assumes a linear mixture model. Solving this inverse problem, however, can be seriously impacted by a wrong estimation of the number of endmembers, a poor estimation of the endmembers themselves, the spectral variability of the endmembers, or the presence of nonlinearities. These problems can result in too many retained endmembers. We propose to tackle this problem by introducing a new formulation for robust linear unmixing that enhances sparsity. With a single tuning parameter, the optimization leads to a range of behaviors: from the standard linear model (low sparsity) to a hard classification (maximal sparsity: only one endmember is retained per pixel). We solve the proposed new functional using a computationally efficient proximal primal-dual method. The experimental study, including both realistic simulated data and real data, demonstrates the versatility of the proposed approach.
Citations: 3
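A toy sketch of sparsity-promoting linear unmixing in the same spirit. This is not the paper's functional or its proximal primal-dual solver: it is a proximal-gradient solution of a hypothetical nonnegative l1-penalized least-squares problem, in which `lam` plays the role of the single tuning parameter (zero keeps all endmembers; large values drive most abundances to zero, approaching a hard classification).

```python
import numpy as np

def sparse_unmix(y, E, lam=0.1, iters=500):
    """min_a 0.5*||y - E a||^2 + lam*||a||_1  s.t. a >= 0, by proximal gradient."""
    n = E.shape[1]
    a = np.full(n, 1.0 / n)
    step = 1.0 / np.linalg.norm(E.T @ E, 2)          # 1 / Lipschitz constant
    for _ in range(iters):
        grad = E.T @ (E @ a - y)
        a = np.maximum(a - step * (grad + lam), 0.0)  # prox of l1 + positivity
    return a

rng = np.random.default_rng(0)
E = rng.random((100, 5))                      # 5 candidate endmember spectra
a_true = np.array([0.7, 0.3, 0.0, 0.0, 0.0])  # only 2 endmembers truly present
y = E @ a_true + 0.01 * rng.standard_normal(100)
print(np.round(sparse_unmix(y, E), 3))        # small abundances pushed to 0
```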
High quality reconstruction of dynamic objects using 2D-3D camera fusion
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296674
Cansen Jiang, Dennis Christie, D. Paudel, C. Demonceaux
Abstract: In this paper, we propose a complete pipeline for high quality reconstruction of dynamic objects using a 2D-3D camera setup attached to a moving vehicle. Starting from the segmented motion trajectories of individual objects, we compute their precise motion parameters, register multiple sparse point clouds to increase the density, and develop a smooth and textured surface from the dense (but scattered) point cloud. The success of our method relies on the proposed optimization framework for accurate motion estimation between two sparse point clouds. Our formulation for fusing closest-point and consensus-based motion estimations, respectively in the absence and presence of motion trajectories, is the key to obtaining such accuracy. Several experiments performed on both synthetic and real (KITTI) datasets show that the proposed framework is very robust and accurate.
Citations: 5
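A minimal sketch of the closest-point ingredient underlying such registration: the least-squares rigid motion (R, t) between two point clouds with known correspondences, via the standard Kabsch/Procrustes solution. The paper's full framework (consensus fusion, trajectory terms, robust optimization) is not reproduced.

```python
import numpy as np

def rigid_motion(P, Q):
    """Least-squares R, t with Q ~ P @ R.T + t, for corresponding Nx3 clouds."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance of centered clouds
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cq - R @ cp

rng = np.random.default_rng(1)
P = rng.random((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.1, -0.2, 0.05])
R, t = rigid_motion(P, Q)
print(np.allclose(R, R_true, atol=1e-6), np.round(t, 3))
```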
Age-dependent saccadic models for predicting eye movements
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296981
O. Meur, A. Coutrot, Adrien Le Roch, A. Helo, P. Rama, Zhi Liu
Abstract: How people look at visual information reveals fundamental information about themselves, their interests, and their state of mind. While previous visual attention models output static 2-dimensional saliency maps, saccadic models predict not only what observers look at but also how they move their eyes to explore the scene. Here we demonstrate that saccadic models are a flexible framework that can be tailored to emulate gaze patterns from childhood to adulthood. The proposed age-dependent saccadic model not only outputs human-like, i.e., age-specific, visual scanpaths, but also significantly outperforms other state-of-the-art saliency models.
Citations: 4
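A toy illustration of the generic saliency-map-to-scanpath idea behind saccadic models: successive fixations are sampled stochastically from a 2D saliency map with inhibition of return. The age-dependent viewing biases that are the paper's contribution are not modeled; all parameter values here are arbitrary assumptions.

```python
import numpy as np

def scanpath(saliency, n_fix=5, inhibition=0.2, radius=10, seed=0):
    """Sample n_fix fixations from a saliency map with inhibition of return."""
    rng = np.random.default_rng(seed)
    s = saliency.astype(float).copy()
    h, w = s.shape
    ys, xs = np.mgrid[0:h, 0:w]
    path = []
    for _ in range(n_fix):
        p = s.ravel() / s.sum()
        idx = rng.choice(s.size, p=p)            # stochastic fixation choice
        y, x = divmod(idx, w)
        path.append((int(y), int(x)))
        mask = (ys - y) ** 2 + (xs - x) ** 2 < radius ** 2
        s[mask] *= inhibition                    # inhibition of return
    return path

sal = np.random.default_rng(2).random((64, 64)) ** 4  # peaky fake saliency map
print(scanpath(sal, n_fix=5))
```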
Deformation transfer of 3D human shapes and poses on manifolds
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296275
Abd El Rahman Shabayek, Djamila Aouada, Alexandre Saint, B. Ottersten
Abstract: In this paper, we introduce a novel method to transfer the deformation of one human body to another directly on a manifold. There exists a rich literature on transferring deformations based on Euclidean representations. However, a 3D human shape and pose live on a manifold and have a Riemannian structure. The proposed method uses the Lie Bodies manifold representation of 3D triangulated bodies. Its benefits are preserved, namely, the minimum required degrees of freedom for any triangle deformation and no heuristics to constrain excessive ones. We give a closed-form solution for deformation transfer directly on the Lie Bodies. The deformations have strictly positive determinants, ensuring that non-physical deformations are removed. We show examples on three datasets and highlight differences with the Euclidean deformation transfer.
Citations: 9
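For context, a minimal Euclidean sketch of the per-triangle deformation gradient (the classic Sumner-Popović construction that triangle-based transfer, including Lie Bodies, builds on). The paper's contribution, a closed-form transfer directly on the Lie Bodies manifold with guaranteed positive determinants, is not reproduced here.

```python
import numpy as np

def deformation_gradient(src, dst):
    """3x3 matrix mapping a source triangle (3x3 vertex rows) to a deformed one."""
    def frame(tri):
        e1, e2 = tri[1] - tri[0], tri[2] - tri[0]
        n = np.cross(e1, e2)
        n = n / np.sqrt(np.linalg.norm(n))   # normal scaled as in Sumner et al.
        return np.column_stack([e1, e2, n])
    return frame(dst) @ np.linalg.inv(frame(src))

src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0]])
dst = src * 2.0                              # a uniform-scale deformation
Q = deformation_gradient(src, dst)
print(np.round(Q, 3))
print("det =", round(float(np.linalg.det(Q)), 3))   # > 0: orientation preserved
```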
On some global topological aspects of manifold learning
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296276
J. Manton, N. L. Bihan
Abstract: With the dual purpose of helping place in perspective the diverse approaches to manifold learning, and of facilitating future research, this paper steps back and describes the manifold learning problem from a holistic perspective. It is argued that getting the homology right can be crucial to successful classification schemes based on the intrinsic geometry of the learnt manifold, and furthermore, that a purely Bayesian approach will not be able to succeed at this in general. Simple examples are given to illustrate the inherent limitations of manifold learning.
Citations: 0
Accurate tumor segmentation in FDG-PET images with guidance of complementary CT images
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8297123
C. Lian, S. Ruan, T. Denoeux, Yu Guo, P. Vera
Abstract: While hybrid PET/CT scanning is becoming a standard imaging technique in clinical oncology, many existing methods still segment tumors in a single modality without considering complementary information from the other modality. In this paper, we propose an unsupervised 3-D method to automatically segment tumors in PET images, where anatomical knowledge from CT images is included as critical guidance to improve PET segmentation accuracy. To this end, a specific context term is proposed to iteratively quantify the conflicts between PET and CT segmentation. In addition, to comprehensively characterize image voxels for reliable segmentation, informative image features are effectively selected via an unsupervised metric learning strategy. The proposed method is based on the theory of belief functions, a powerful tool for information fusion and uncertain reasoning. Its performance has been evaluated on real-patient PET/CT images.
Citations: 4
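A toy illustration of the belief-functions machinery the paper builds on: Dempster's rule combining a PET-derived and a CT-derived mass function over a {tumor, background} frame, where the mass assigned to empty intersections measures PET/CT conflict. The paper's iterative context term and metric learning are not reproduced, and the mass values below are made up.

```python
# Frame-of-discernment subsets encoded as frozensets.
T, B = frozenset({"tumor"}), frozenset({"background"})
TB = T | B                                      # "either" (total ignorance)

def dempster(m1, m2):
    """Dempster's rule of combination; also returns the conflict mass."""
    joint, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                joint[inter] = joint.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb             # mass on empty intersections
    return {k: v / (1.0 - conflict) for k, v in joint.items()}, conflict

m_pet = {T: 0.7, B: 0.1, TB: 0.2}               # confident PET evidence
m_ct = {T: 0.4, B: 0.3, TB: 0.3}                # weaker CT evidence
fused, k = dempster(m_pet, m_ct)
print("PET/CT conflict:", round(k, 3))
for s, v in fused.items():
    print(set(s), round(v, 3))
```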
Accurate dense stereo matching for road scenes
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296375
Oussama Zeglazi, M. Rziza, A. Amine, C. Demonceaux
Abstract: The stereo matching task is at the core of applications linked to intelligent vehicles. In this paper, we present a new variant of the Census Transform (CT) that is more robust against radiometric changes in real road scenes. We demonstrate that the proposed cost function outperforms conventional cost functions on the KITTI benchmark. The cost aggregation method is also updated to take edge information into account. This significantly improves the aggregated costs, especially within homogeneous regions. The Winner-Takes-All (WTA) strategy is used to compute disparity values. To further eliminate the remaining matching ambiguities, a post-processing step is performed. Experiments were conducted on the new Middlebury dataset, as well as on the real road traffic scenes of the KITTI database. The obtained disparity results demonstrate that the proposed method is promising.
Citations: 5
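A minimal sketch of the standard Census Transform and Hamming matching cost that the paper's variant builds on. The proposed variant and the edge-aware aggregation are not reproduced, and the wrap-around border handling via `np.roll` is a simplification.

```python
import numpy as np

def census(img, win=5):
    """Census Transform: each pixel becomes a bit string of neighbor comparisons."""
    r = win // 2
    codes = np.zeros(img.shape, dtype=np.uint64)   # 24 bits for a 5x5 window
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            neighbor = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            codes = (codes << np.uint64(1)) | (neighbor < img).astype(np.uint64)
    return codes

def hamming_cost(code_l, code_r, d):
    """Matching cost at disparity d: Hamming distance between census codes."""
    xor = code_l ^ np.roll(code_r, d, axis=1)
    return np.array([bin(v).count("1") for v in xor.ravel()],
                    dtype=np.int32).reshape(xor.shape)

left = np.random.default_rng(3).integers(0, 255, (20, 30), dtype=np.uint8)
right = np.roll(left, -2, axis=1)      # synthetic scene with disparity 2
cost = hamming_cost(census(left), census(right), 2)
print("mean cost at the true disparity:", cost.mean())   # ~0
```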
From neonatal to adult brain MR image segmentation in a few seconds using 3D-like fully convolutional network and transfer learning
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8297117
Yongchao Xu, T. Géraud, I. Bloch
Abstract: Brain magnetic resonance imaging (MRI) is widely used to assess brain development in neonates and to diagnose a wide range of neurological diseases in adults. Such studies are usually based on quantitative analysis of different brain tissues, so it is essential to be able to classify them accurately. In this paper, we propose a fast automatic method that segments 3D brain MR images into different tissues using a fully convolutional network (FCN) and transfer learning. Compared to existing deep learning-based approaches that rely either on 2D patches or on fully 3D FCNs, our method is much faster: it only takes a few seconds, and only a single modality (T1 or T2) is required. In order to take 3D information into account, sets of 3 successive 2D slices are stacked to form 2D "color" images, which serve as input for the FCN pre-trained on ImageNet for natural image classification. To the best of our knowledge, this is the first method that applies transfer learning to segment both neonatal and adult brain 3D MR images. Our experiments on two public datasets show that our method achieves state-of-the-art results.
Citations: 40
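A minimal PyTorch sketch of the 3D-like input trick described above: three successive slices become the R, G, B channels of a 2D image, so a pretrained 2D segmentation backbone can be fine-tuned slice-wise. The choice of torchvision's `fcn_resnet50`, the 4-class head, and the fake volume are placeholder assumptions, not the paper's exact network.

```python
import torch
from torchvision.models.segmentation import fcn_resnet50

def stack_slices(volume: torch.Tensor) -> torch.Tensor:
    """(D, H, W) volume -> (D-2, 3, H, W) batch of pseudo-RGB slice triplets."""
    return torch.stack([volume[:-2], volume[1:-1], volume[2:]], dim=1)

# weights="DEFAULT" loads pretrained weights (downloads on first use);
# we then swap the head for a hypothetical 4 tissue classes.
model = fcn_resnet50(weights="DEFAULT")
model.classifier[4] = torch.nn.Conv2d(512, 4, kernel_size=1)
model.eval()

volume = torch.rand(16, 224, 224)                 # fake single-modality volume
with torch.no_grad():
    logits = model(stack_slices(volume))["out"]   # (14, 4, 224, 224)
print(logits.shape)
```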
Visual comfort assessment of stereoscopic images using deep visual and disparity features based on human attention
2017 IEEE International Conference on Image Processing (ICIP) | Pub Date: 2017-09-17 | DOI: 10.1109/ICIP.2017.8296374
Hyunwook Jeong, Hak Gu Kim, Yong Man Ro
Abstract: This paper proposes a novel visual comfort assessment (VCA) method for stereoscopic images using deep learning. To predict the visual discomfort of the human visual system in stereoscopic viewing, we devise VCA deep networks that latently encode perceptual cues, namely the visual differences between stereoscopic images and human attention-based disparity magnitude and gradient information. To extract the visual difference features from the left and right views, a Siamese network is employed. In addition, human attention region-based disparity magnitude and gradient maps are fed to two individual deep convolutional neural networks (DCNNs) for disparity-related features based on the human visual system (HVS). Finally, by aggregating these perceptual features, the proposed method directly predicts the final visual comfort score. Extensive comparative experiments have been conducted on the IEEE-SA dataset. Experimental results show that the proposed method yields excellent correlation performance compared to existing methods.
Citations: 15
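A minimal PyTorch sketch of the Siamese "visual difference" branch: left and right views pass through a shared-weight encoder, and their feature difference feeds a comfort-score regressor. The two disparity-map DCNN branches and the human-attention weighting from the paper are omitted, and the layer sizes are arbitrary assumptions.

```python
import torch
import torch.nn as nn

class SiameseComfort(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # shared weights for both views
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Sequential(nn.Linear(32, 16), nn.ReLU(),
                                       nn.Linear(16, 1))

    def forward(self, left, right):
        diff = self.encoder(left) - self.encoder(right)  # visual difference cue
        return self.regressor(diff)              # predicted comfort score

model = SiameseComfort()
left = torch.rand(2, 3, 128, 128)
right = torch.rand(2, 3, 128, 128)
print(model(left, right).shape)                  # torch.Size([2, 1])
```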