2017 IEEE International Conference on Image Processing (ICIP): Latest Publications

Circle detection by arc-support line segments
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-11-29 DOI: 10.1109/ICIP.2017.8296246
Changsheng Lu, Siyu Xia, Wanming Huang, Ming Shao, Y. Fu
Abstract: Circle detection is fundamental to both object detection and high-accuracy localization in visual control systems. We propose a novel circle detection method based on analysing and refining arc-support line segments. The key idea is to use a line segment detector to extract only the arc-support line segments, those likely to lie on a circle, rather than all line segments. Pairs of line segments are analysed to form valid pairs, from which an initial circle set is generated. Mean shift clustering then produces circle candidates, which are verified against the geometric attributes of circle edges. Finally, a two-stage circle fitting increases the accuracy of circle localization and radius measurement. The experimental results demonstrate that the proposed method outperforms other well-known approaches on circles that are incomplete, occluded, blurred or over-illuminated. Moreover, our method shows significant improvements in accuracy, robustness and efficiency on industrial Printed Circuit Board (PCB) images as well as on synthesized, natural and complicated images.
Citations: 29
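The circle-locating step can be illustrated with a standard algebraic least-squares (Kåsa) circle fit. This is a generic sketch of a single fitting pass over edge points, not the authors' exact two-stage procedure:

```python
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit.

    Solves x^2 + y^2 + D*x + E*y + F = 0 in the least-squares sense,
    then converts the coefficients to center/radius form.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x ** 2 + y ** 2)
    (D, E, F), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx ** 2 + cy ** 2 - F)
    return (cx, cy), radius
```

Applying such a fit a second time to inliers of the first estimate is one plausible way to realize the "two-stage" refinement the abstract mentions.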
Appearance and motion based deep learning architecture for moving object detection in moving camera
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-11-03 DOI: 10.1109/ICIP.2017.8296597
Byeongho Heo, Kimin Yun, J. Choi
Abstract: Background subtraction is a widely used method for moving object detection, but it is vulnerable to the dynamic backgrounds that arise in moving-camera video. In this paper, we propose a novel moving object detection approach that uses deep learning to achieve robust performance even with a dynamic background. The proposed approach considers motion features as well as appearance features: we design a deep learning architecture composed of two networks, an appearance network and a motion network. The two networks are combined to detect moving objects robustly against background motion, exploiting the appearance of the target object in addition to motion differences. Experiments show that the proposed method runs at 50 fps on a GPU and outperforms state-of-the-art methods on various moving-camera videos.
Citations: 21
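The static-camera baseline that this paper improves on can be sketched in a few lines: classical background subtraction thresholds the absolute difference between a frame and a background model. This is an illustration of the baseline only, not the paper's two-stream network:

```python
import numpy as np

def moving_object_mask(background, frame, thresh=25):
    """Classical background subtraction: pixels whose absolute
    difference from the background model exceeds `thresh` are
    flagged as moving (1), others as static (0)."""
    # Widen to a signed type so the subtraction cannot wrap around.
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > thresh).astype(np.uint8)
```

When the camera itself moves, nearly every pixel exceeds the threshold, which is exactly the failure mode that motivates the appearance-plus-motion architecture above.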
Fitness heart rate measurement using face videos
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-10-19 DOI: 10.1109/ICIP.2017.8296632
Qiang Zhu, Chau-Wai Wong, C. Fu, Min Wu
Abstract: Recent studies have shown that subtle changes in facial color due to the heartbeat can be captured by digital video recorders. Most existing work has focused on subjects at rest or with relatively small motions. In this work, we propose a heart-rate monitoring method for fitness exercise videos. We design a highly precise motion compensation scheme based on optical flow, and use motion information as a cue to adaptively remove ambiguous frequency components, improving the heart-rate estimates. Experimental results show that our proposed method achieves highly precise estimation, with an average error of 1.1 beats per minute (BPM), or 0.58% relative error.
Citations: 16
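The final frequency-domain step of such pipelines can be sketched as picking the dominant spectral peak of the extracted pulse signal inside a plausible heart-rate band. This is a minimal illustration under that assumption, omitting the paper's motion compensation and adaptive component removal:

```python
import numpy as np

def estimate_bpm(signal, fs, band=(0.7, 4.0)):
    """Estimate heart rate from a 1-D pulse signal sampled at `fs` Hz
    by locating the strongest spectral peak inside `band`
    (0.7-4.0 Hz corresponds to 42-240 BPM)."""
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove the DC offset
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return peak_hz * 60.0                       # Hz -> beats per minute
```

During exercise, motion artifacts create competing peaks in this band, which is why the paper uses motion cues to discard ambiguous frequency components before peak picking.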
ADMM penalty parameter selection with Krylov subspace recycling technique for sparse coding
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-10-03 DOI: 10.1109/ICIP.2017.8296621
Youzuo Lin, B. Wohlberg, V. Vesselinov
Abstract: Sparse representations are widely used across a broad variety of fields. A number of methods have been proposed to solve the sparse coding problem, of which the alternating direction method of multipliers (ADMM) is one of the most popular. A disadvantage of this method, however, is the need to select an algorithm parameter, the penalty parameter, which has a significant effect on the rate of convergence. Although a number of heuristics have been proposed, there is as yet no general theory providing a good choice of this parameter for all problems. An obvious approach would be to try several parameter values at each iteration and proceed with the one that delivers the best reduction in functional value, but this would substantially increase the computational cost. We show that, when solving the sparse coding problem for a dictionary corresponding to an operator with a fast transform, where iterative methods are required to solve the main linear system arising in the ADMM solution, it is possible to explore a large range of parameters at marginal additional cost, greatly improving the robustness of the method to the choice of penalty parameter.
Citations: 4
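The role of the penalty parameter can be seen in a textbook ADMM solver for the lasso form of sparse coding, min_x 0.5||Ax - b||² + λ||x||₁. This sketch uses a direct Cholesky solve rather than the iterative Krylov methods the paper targets; `rho` is the penalty parameter whose selection the paper studies:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, n_iter=200):
    """Textbook ADMM for min_x 0.5*||A x - b||^2 + lam*||x||_1.

    x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
    z-update: elementwise soft-thresholding with threshold lam/rho
    u-update: scaled dual ascent
    """
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    # The linear-system solve below is the step that dominates cost;
    # for large fast-transform dictionaries it is done iteratively,
    # which is where the paper's Krylov-recycling idea applies.
    L = np.linalg.cholesky(AtA + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
    for _ in range(n_iter):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        z = soft(x + u, lam / rho)
        u = u + x - z
    return z
```

A poorly chosen `rho` can slow convergence by orders of magnitude on ill-conditioned problems, which is the practical motivation for exploring many `rho` values cheaply.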
Gland segmentation guided by glandular structures: A level set framework with two levels
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-10-03 DOI: 10.1109/ICIP.2017.8296663
Chen Wang, J. Bao, H. Bu
Abstract: Pathologic diagnosis is the gold standard of clinical diagnosis, and the identification and segmentation of histological structures are prerequisites for it. In clinical practice, manual assessment is time-consuming and different doctors often disagree in their observations, so an automatic, precise segmentation method is important for auxiliary diagnosis. We propose a level set framework for gland segmentation in which the 0 and k level sets represent the boundaries of the lumen regions and the epithelial layers, respectively. Validation was performed on clinical data from West China Hospital, Sichuan University. The experimental results show that our method performs well and is robust to the shape variety of endometrial glands.
Citations: 0
BAFT: Binary affine feature transform
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-27 DOI: 10.1109/ICIP.2017.8296797
J. Arnfred, Viet Dung Nguyen, Stefan Winkler
Abstract: We introduce BAFT, a fast binary and quasi affine-invariant local image feature. It combines the affine invariance of Harris-Affine feature descriptors with the speed of binary descriptors such as BRISK and ORB. BAFT derives its speed and precision from sampling local image patches in a pattern that depends on the second moment matrix of the patch. This approach yields a fast yet discriminative descriptor, especially for image pairs with large perspective changes. Our evaluation on 40 different image pairs shows that BAFT increases the area under the precision/recall curve (AUC) over traditional descriptors for the majority of pairs, with a very low performance penalty compared to the similar ORB descriptor. The BAFT source code is available for download.
Citations: 1
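What makes binary descriptors like BRISK, ORB, and BAFT fast to match is that the distance between two descriptors reduces to an XOR and a popcount. A minimal sketch of that matching metric, assuming descriptors stored as packed `uint8` arrays:

```python
import numpy as np

def hamming_distance(d1, d2):
    """Hamming distance between two packed binary descriptors:
    XOR the bytes, then count the set bits."""
    return int(np.unpackbits(np.bitwise_xor(d1, d2)).sum())
```

Brute-force matching then amounts to taking, for each query descriptor, the candidate with the smallest Hamming distance, which modern CPUs can do at a fraction of the cost of the floating-point L2 distances used by traditional descriptors.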
Virtual reality content streaming: Viewport-dependent projection and tile-based techniques
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-27 DOI: 10.1109/ICIP.2017.8296518
Alireza Zare, A. Aminlou, M. Hannuksela
Abstract: Virtual reality (VR) head-mounted displays (HMDs) require spherical panoramic content with high spatial and temporal fidelity to immerse viewers in the captured scene. VR content is therefore extremely bandwidth intensive and poses technical challenges for the design of a VR streaming system. A bandwidth-efficient VR streaming system can be achieved with viewport-aware adaptation techniques, in which the part of the sphere within the viewer's field of view is presented at higher quality. In this paper, two recently emerged viewport-adaptive streaming methods, the tile-based method and the truncated square pyramid (TSP) projection (a well-studied viewport-dependent projection), are compared using a proposed quality assessment methodology, in terms of both storage and streaming bitrate. The simulation results indicate that the tile-based approach has slightly lower streaming performance, while offering significant storage and encoding-time savings on the server side compared to TSP-based streaming.
Citations: 19
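The core of tile-based viewport adaptation is deciding which tiles overlap the viewer's field of view so that only those are fetched at high quality. A simplified sketch for the yaw axis of an equirectangular tile grid (tile rows, pitch, and projection geometry are omitted; the function and its column model are illustrative, not from the paper):

```python
def visible_tile_cols(yaw_deg, hfov_deg, n_cols):
    """Columns of an equirectangular tile grid intersecting a viewport.

    Column i spans [i*360/n_cols, (i+1)*360/n_cols) degrees of yaw;
    the viewport spans hfov_deg centered at yaw_deg. Wraparound at
    360 degrees is handled by testing shifted copies of each column.
    """
    width = 360.0 / n_cols
    lo, hi = yaw_deg - hfov_deg / 2.0, yaw_deg + hfov_deg / 2.0
    cols = []
    for i in range(n_cols):
        a, b = i * width, (i + 1) * width
        if any(a + s < hi and b + s > lo for s in (-360.0, 0.0, 360.0)):
            cols.append(i)
    return cols
```

With 8 columns and a 90-degree horizontal field of view, only 2 of 8 columns need high-quality delivery, which is the source of the streaming bitrate savings the paper measures.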
Highly parallel HEVC motion estimation based on multiple temporal predictors and nested diamond search
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-26 DOI: 10.1109/ICIP.2017.8296782
E. Hojati, Jean-François Franche, S. Coulombe, C. Vázquez
Abstract: Rate-constrained motion estimation (RCME) is the most computationally intensive task in H.265/HEVC encoding. Massively parallel architectures such as graphics processing units (GPUs), used in combination with a multi-core central processing unit (CPU), provide a promising computing platform for fast encoding. However, the dependencies involved in deriving motion vector predictors (MVPs) prevent frame-level parallelization of prediction unit (PU) processing, and the conditional execution structure of typical fast search algorithms is ill-suited to GPUs designed for data-intensive parallel problems. In this paper, we propose a novel, highly parallel RCME method based on multiple temporal motion vector (MV) predictors and a new fast nested diamond search (NDS) algorithm well suited to a GPU. The proposed framework provides fine-grained encoding parallelism. Experimental results show that our approach reduces GPU load while achieving better BD-Rate than prior full-search parallel methods based on a single MV predictor.
Citations: 3
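The classical diamond search that the paper's nested variant builds on can be sketched for a single block: a large diamond pattern walks toward the best SAD match until the center wins, then a small diamond refines. This is the sequential textbook algorithm, not the authors' GPU-parallel nested formulation:

```python
import numpy as np

def diamond_search(ref, cur, tl, bs=16, max_steps=32):
    """Find the motion vector (dy, dx) of the bs x bs block of `cur`
    whose top-left corner is `tl`, by minimizing the sum of absolute
    differences (SAD) against `ref` with a two-stage diamond search."""
    large = [(0, 0), (-2, 0), (2, 0), (0, -2), (0, 2),
             (-1, -1), (-1, 1), (1, -1), (1, 1)]
    small = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    y0, x0 = tl
    block = cur[y0:y0 + bs, x0:x0 + bs]

    def cost(dy, dx):
        y, x = y0 + dy, x0 + dx
        if y < 0 or x < 0 or y + bs > ref.shape[0] or x + bs > ref.shape[1]:
            return np.inf                      # candidate leaves the frame
        return np.abs(ref[y:y + bs, x:x + bs] - block).sum()

    dy, dx = 0, 0
    for pattern in (large, small):             # coarse walk, then refine
        for _ in range(max_steps):
            step = min(pattern, key=lambda d: cost(dy + d[0], dx + d[1]))
            if step == (0, 0):                 # center is best: stop/refine
                break
            dy, dx = dy + step[0], dx + step[1]
    return dy, dx
```

The data-dependent `break` in the inner loop is exactly the kind of conditional control flow that maps poorly onto GPU warps, which motivates the paper's restructured, fine-grained parallel search.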
Joint demosaicing and denoising of noisy Bayer images with ADMM
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-25 DOI: 10.1109/ICIP.2017.8296823
Hanlin Tan, Xiangrong Zeng, Shiming Lai, Yu Liu, Maojun Zhang
Abstract: Image demosaicing and denoising are important steps in image signal processing. Running them sequentially has the essential drawback that each degrades the results of the other; joint demosaicing and denoising overcomes this difficulty by solving both problems in one model. This paper introduces a unified objective function with hidden priors, solved with a variant of ADMM, to recover a full-resolution color image from a noisy Bayer input. Experimental results demonstrate that our method outperforms state-of-the-art methods in both PSNR and visual quality, and is much more robust to variations in noise level.
Citations: 47
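For context, the baseline that joint methods improve on is plain interpolation-based demosaicing of the Bayer mosaic. A minimal bilinear sketch for an RGGB pattern via normalized convolution (an illustration of the problem setup, not the paper's ADMM model):

```python
import numpy as np

def demosaic_bilinear(raw):
    """Bilinear demosaic of an RGGB Bayer mosaic.

    Each channel is recovered as conv(raw * mask) / conv(mask), i.e. a
    distance-weighted average of that channel's sampled neighbors.
    """
    H, W = raw.shape
    yy, xx = np.mgrid[0:H, 0:W]
    masks = {
        'R': (yy % 2 == 0) & (xx % 2 == 0),   # red on even rows/cols
        'G': (yy % 2) != (xx % 2),            # green on the two diagonals
        'B': (yy % 2 == 1) & (xx % 2 == 1),   # blue on odd rows/cols
    }
    k = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 4.0

    def conv(img):                            # 3x3 correlation, zero-padded
        p = np.pad(img, 1)
        out = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += k[dy + 1, dx + 1] * p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        return out

    rgb = np.zeros((H, W, 3))
    for i, ch in enumerate('RGB'):
        m = masks[ch].astype(float)
        rgb[..., i] = conv(raw * m) / conv(m)
    return rgb
```

On a noisy input, this interpolation smears noise across channels, and denoising afterwards blurs the interpolated edges; the paper's joint formulation avoids committing to either step first.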
Plant leaf segmentation for estimating phenotypic traits
2017 IEEE International Conference on Image Processing (ICIP) Pub Date : 2017-09-25 DOI: 10.1109/ICIP.2017.8297010
Yuhao Chen, Javier Ribera, C. Boomsma, E. Delp
Abstract: In this paper we propose a method to segment individual leaves of crop plants from Unmanned Aerial Vehicle (UAV) imagery for the purpose of deriving phenotypic properties of the plant. The crop used in our study is sorghum [Sorghum bicolor (L.) Moench]. Phenotyping is a set of methodologies for analyzing and obtaining characteristic traits of a plant; in a phenotypic study, leaves are often used to estimate traits such as individual leaf area and Leaf Area Index (LAI). Our approach segments the leaves in polar coordinates, using the plant center as the origin, and estimates the shape of each leaf with a shape model. Experimental results indicate that this approach provides good estimates of leaf phenotypic properties.
Citations: 13
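The coordinate change underlying this approach is the standard Cartesian-to-polar mapping about the plant center, under which radially spreading leaves become roughly axis-aligned. A minimal sketch (the function name and point layout are illustrative):

```python
import numpy as np

def to_polar(points, center):
    """Map 2-D image points (x, y) to polar coordinates (r, theta)
    about `center`, so each point is described by its distance from
    the plant center and its angular position."""
    d = np.asarray(points, dtype=float) - np.asarray(center, dtype=float)
    r = np.hypot(d[:, 0], d[:, 1])
    theta = np.arctan2(d[:, 1], d[:, 0])
    return np.column_stack([r, theta])
```

In this representation, separating leaves reduces largely to partitioning the angular axis, which is what makes the subsequent shape-model fitting tractable.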