Latest articles from the 2016 IEEE International Conference on Signal and Image Processing (ICSIP)

Nearly lossless HDR images compression by background image segmentation
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888260
Lei Chen, Zhiming Wang
Abstract: High-dynamic-range, high-spatial-resolution images, such as 16-bit X-ray images, are required in security and medical applications. However, transporting such images is costly in network bandwidth, storage capacity, and transmission time, which calls for efficient compression. In this paper, we propose a nearly lossless compression algorithm with background segmentation for 16-bit X-ray images. Background and foreground are separated with a threshold computed by detecting the background peak of the image's gray histogram, and background pixels connected to the image border are labeled as background by two-pass run-length labeling. The background is then compressed with run-length encoding (RLE) and the foreground is encoded with Lempel-Ziv-77 (LZ77). To validate the proposed algorithm, we compare it with the state-of-the-art compression algorithms LZ77, JPEG-LS, and arithmetic coding. Experimental results show that our algorithm achieves good performance in both compression ratio and speed, with negligible information loss.
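As a rough sketch of the pipeline described above (not the authors' implementation; the threshold value and data layout are assumed), the background/foreground split can be run-length encoded while the foreground samples go through an LZ77-family coder — here zlib's DEFLATE stands in for plain LZ77:

```python
import zlib

def compress_xray(pixels, threshold):
    """Sketch: RLE the background mask, DEFLATE (LZ77-based) the
    foreground 16-bit samples. In the paper, 'threshold' would come
    from the background peak of the gray histogram."""
    mask = [p <= threshold for p in pixels]          # True = background
    runs, current, count = [], mask[0], 0
    for m in mask:                                   # run-length encode mask
        if m == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = m, 1
    runs.append((current, count))
    fg = b"".join(p.to_bytes(2, "little") for p, m in zip(pixels, mask) if not m)
    return runs, zlib.compress(fg)

row = [10, 10, 10, 900, 950, 10, 10, 10]             # toy 16-bit scanline
runs, fg_blob = compress_xray(row, threshold=100)
# runs == [(True, 3), (False, 2), (True, 3)]
```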
Citations: 3
A novel method of stone surface texture image recognition
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888241
Silan Huang, Shangping Zhong, Kaizhi Chen
Abstract: With the development of stone processing and sales, effective methods for recognising stone surface texture images are needed. We propose a new stone surface texture image recognition method based on texture and colour. We combine the following visual features: Gabor features, which model the receptive-field profiles of single cells in the mammalian visual cortex; grey-level co-occurrence matrices (GLCM), which describe the image's gray-level distribution and spatial layout; and HSV colour features, which are consistent with human visual perception. In addition, since sub-images of a stone surface texture image retain the texture structure of the original, this paper adopts a block-training scheme, subdividing the original image into non-overlapping sub-images to multiply the number of training samples for the SVM classifier. Extensive experimental results show that the proposed method provides a useful reference for the study of stone texture image recognition.
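The block-training idea — multiplying training samples by tiling the image — can be sketched as follows (a hypothetical illustration; the paper's Gabor/GLCM/HSV feature extraction and SVM training are not reproduced):

```python
def subdivide(image, block):
    """Split a 2-D image (list of rows) into non-overlapping block x block
    sub-images, multiplying the number of training samples fed to the
    classifier."""
    h, w = len(image), len(image[0])
    return [[row[c:c + block] for row in image[r:r + block]]
            for r in range(0, h - block + 1, block)
            for c in range(0, w - block + 1, block)]

img = [[r * 4 + c for c in range(4)] for r in range(4)]   # one 4x4 "texture"
subs = subdivide(img, 2)                                  # four 2x2 samples
```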
Citations: 2
Robust visual tracking via an online multiple instance learning algorithm based on SIFT features
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888229
Liu Yuepeng, Zhang Shuyan, Zhao Lirui, Wang Xiaochen
Abstract: This paper presents a SIFT-based multiple instance learning (MIL) algorithm to deal with pose variation during tracking. The MIL algorithm learns weak classifiers from instances in positive and negative bags. A strong classifier is then assembled from the weak classifiers selected by maximizing the inner product between each classifier and the maximum-likelihood probability of the instances. This avoids computing the bag and instance probabilities M times, reducing computation time. Traditional MIL represents instances with Haar-like features, which imposes a heavy computational load. To address this, the Harris operator is introduced to select salient SIFT features for representing an instance; combining the Harris operator with SIFT features greatly reduces the number of extracted features. Finally, the proposed algorithm is evaluated on several classical videos. The experimental results show that the method outperforms both the traditional MIL algorithm and the weighted MIL algorithm (WMIL).
Citations: 4
Towards robust ego-centric hand gesture analysis for robot control
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888345
Hongyong Song, Weijiang Feng, Naiyang Guan, Xuhui Huang, Zhigang Luo
Abstract: Wearable devices with ego-centric cameras are likely to be the next-generation platform for human-computer interaction tasks such as robot control, and hand gesture is a natural way to interact from the ego-centric viewpoint. In this paper, we present an ego-centric multi-stage hand gesture analysis pipeline for robot control that works robustly in unconstrained environments with varying luminance. In particular, we first propose an adaptive colour- and contour-based hand segmentation method to segment the hand region from the ego-centric viewpoint. We then propose a convex U-shaped curve detection algorithm to precisely locate fingertip positions and, in parallel, use convolutional neural networks to recognize hand gestures. Combining these cues, we use most of the available hand information to control the robot, and we build a hand gesture analysis system on an iPhone and a robot-arm platform to validate its effectiveness. Experimental results demonstrate that our method controls the robot arm by hand gesture in real time.
Citations: 5
Selective block based approach for neoplasm detection from T2-weighted brain MRIs
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888242
N. Gupta, A. Seal, P. Bhatele, P. Khanna
Abstract: A realistic challenge in neuroanatomy is to help radiologists detect brain neoplasms at an early stage. This paper presents a fast and accurate computer-aided diagnosis (CAD) system based on a selective block-based approach for neoplasm (tumor) detection in T2-weighted brain MR images. The salient contribution of this work is fast discrimination using the selective block-based approach. Local binary patterns are used as features and classified with a support vector machine. Experiments are performed on a dataset of 100 patients, 55 with brain tumors and the rest normal. The proposed CAD system achieves 99.67% accuracy with 100% sensitivity. Comparative studies on the same dataset show that it outperforms several existing systems.
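A minimal sketch of the local binary pattern features mentioned above (the basic 3x3 LBP; the paper's exact variant and its block-selection step are not specified here):

```python
def lbp_code(patch):
    """Basic 3x3 local binary pattern: compare the centre pixel against
    its 8 neighbours (clockwise from the top-left) and pack the
    comparison bits into one byte."""
    centre = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= centre:
            code |= 1 << bit
    return code

# Bright rows above and below a mid-grey centre: bits 0-2 and 4-6 set.
print(lbp_code([[9, 9, 9], [1, 5, 1], [9, 9, 9]]))   # 119
```

An image is typically described by the histogram of these codes over each block, which is what a classifier such as an SVM would consume.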
Citations: 2
Design of data transmission system for speed measurement radar between ARM and FPGA based on embedded Linux
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888281
X. Ye, Yafei Li, Yingying Du, Zhengyu Cai
Abstract: Based on an embedded Linux system, a data transmission system for a weapon's exterior-ballistics speed-measurement radar is studied on the ARM Cortex-A9. A communication interface between the ARM and the FPGA is designed, and the corresponding hardware design and character-device driver design, using interrupts and DMA, are presented. Data can be transferred correctly at high speed; test results show that the system runs stably with good real-time performance.
Citations: 4
A high accuracy coregistration method for TOPSAR interferometry
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888238
Zhihua Song, Houjun Jiang
Abstract: This paper introduces a high-accuracy coregistration method for synthetic aperture radar (SAR) interferometry acquired in Terrain Observation by Progressive Scans (TOPS) mode. Due to the large variation of the Doppler centroid frequency in azimuth, even a coregistration error of 0.01 pixels results in a phase ramp of approximately 37° within one burst. Our method begins with geometric coregistration, using precise orbit information and an external digital elevation model (DEM) to achieve good relative accuracy. The remaining coregistration error is then estimated from the interferometric phase difference in the burst overlaps. The method is applied to Sentinel-1 TOPS interferometric data over the fault of the 2015 Mw 7.8 Nepal earthquake, and the results confirm its very high accuracy.
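The quoted sensitivity can be sanity-checked numerically. Assuming illustrative Sentinel-1 IW parameters (the values below are assumptions, not taken from the paper), the burst-internal Doppler centroid sweep turns a tiny azimuth misregistration into a large phase ramp:

```python
import math

df_dc = 5200.0   # Doppler centroid sweep within one burst, Hz (assumed)
prf = 486.0      # azimuth sampling rate, Hz (assumed)
misreg = 0.01    # azimuth coregistration error, pixels

# A time shift of misreg/prf seconds against a Doppler span of df_dc Hz
# accumulates this much interferometric phase across the burst:
phase_ramp = 2 * math.pi * df_dc * misreg / prf
print(math.degrees(phase_ramp))   # roughly 38 degrees, the same order as 37
```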
Citations: 0
A blindly cooperative spectrum sensing algorithms based on linear prediction for cognitive radio
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888361
Wen Yuanyuan, Song Dawei, Sun Wenfeng, Fan Xiaoyan, Li Dong, Luo Xi
Abstract: Spectrum sensing is one of the key functions of cognitive radio, so the performance of the adopted sensing method is very important. When the primary user and the cognitive radio transmit simultaneously, spectrum-sensing performance is degraded. We propose applying linear prediction to the blind source separation algorithm as an alternative to more conventional techniques. Simulation results show that the proposed method is feasible and, in all experiments, improves spectrum-sensing performance over the energy detection method while the cognitive radio is in operation.
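A toy illustration of why linear prediction can separate a correlated primary-user signal from noise (a hypothetical one-tap predictor, not the paper's detector):

```python
import math, random

def lp_residual_ratio(x):
    """One-tap linear predictor: fit a1 minimising sum (x[n]-a1*x[n-1])^2,
    then return residual energy over total energy.  A correlated signal
    is predictable (ratio near 0); white noise is not (ratio near 1)."""
    num = sum(x[n] * x[n - 1] for n in range(1, len(x)))
    den = sum(x[n - 1] ** 2 for n in range(1, len(x)))
    a1 = num / den
    resid = sum((x[n] - a1 * x[n - 1]) ** 2 for n in range(1, len(x)))
    return resid / sum(v * v for v in x)

random.seed(0)
tone = [math.sin(0.2 * n) for n in range(200)]       # correlated "primary user"
noise = [random.gauss(0, 1) for _ in range(200)]     # channel noise only
# lp_residual_ratio(tone) is small; lp_residual_ratio(noise) is near 1
```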
Citations: 2
On-line longitudinal rip detection of conveyor belts based on machine vision
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888275
Yanli Yang, Yanfei Zhao, Changyun Miao, Lijuan Wang
Abstract: Longitudinal rips of conveyor belts are a serious threat to production safety. Based on machine-vision technology, an algorithm is proposed to detect longitudinal rips online directly from gray belt images. A gray image is first collapsed into a one-dimensional vector, which is then analyzed to obtain rip eigenfunctions; faults are diagnosed using these eigenfunctions. Searching the one-dimensional vector is computationally cheaper than searching the gray image. The validity of the proposed algorithm is verified by test results on belt images.
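Collapsing the 2-D image into a 1-D profile before searching can be sketched like this (a simplified stand-in; the paper's rip eigenfunctions are not reproduced):

```python
def rip_columns(image, drop=50):
    """Collapse a gray belt image (list of rows) into a vector of column
    minima, then flag columns dipping well below the belt's typical
    brightness.  A dark longitudinal rip shows up as a persistent dip."""
    w = len(image[0])
    profile = [min(row[c] for row in image) for c in range(w)]
    typical = sorted(profile)[w // 2]                # median column minimum
    return [c for c, v in enumerate(profile) if v < typical - drop]

belt = [[200] * 8 for _ in range(5)]                 # uniform bright belt
for row in belt:
    row[3] = 40                                      # dark rip down column 3
print(rip_columns(belt))                             # [3]
```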
Citations: 6
Fusion of infrared and visual images through multiscale hybrid unidirectional total variation
2016 IEEE International Conference on Signal and Image Processing (ICSIP) Pub Date : 2016-08-01 DOI: 10.1109/SIPROCESS.2016.7888220
Yi Wang, Zhonghua Luo, Zhi-hai Xu, H. Feng, Qi Li, Yue-ting Chen
Abstract: As an important research area in image analysis and computer vision, fusion of infrared and visible images aims to combine image information from different sensors effectively. Since the fused image is the end product of the fusion process, it should distinctly reveal the vital information of both source images. To this end, an image fusion method based on multiscale hybrid unidirectional total variation (MHUTV) and a visual weight map (VWM) is proposed in this paper. MHUTV combines the extraction of image details with the suppression of stripe noise, yielding a better visual effect; it is a multiscale, unidirectional, self-adaptive image decomposition method used here to fuse infrared and visible images. The visual weight map models the attention distribution of a human observer and provides a subband fusion criterion that ensures regions of interest from the infrared and visible images are highlighted. First, multiscale hybrid unidirectional total variation is used to decompose the source images into approximation and detail subbands. Second, the approximation and detail subbands are each fused by a rule based on the visual weight map. Finally, the fused subbands are combined into one image by the inverse MHUTV. Comparison experiments on different sets of images demonstrate the effectiveness of the proposed method.
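The approximation/detail fusion scheme can be illustrated on 1-D signals, with a moving average standing in for the MHUTV decomposition (a toy sketch under that substitution, not the paper's method):

```python
def fuse_1d(a, b, radius=1):
    """Base/detail fusion: a moving average gives the approximation band,
    the residual is the detail band; approximations are averaged and the
    larger-magnitude detail wins at each sample."""
    def smooth(x):
        return [sum(x[max(0, i - radius):i + radius + 1])
                / len(x[max(0, i - radius):i + radius + 1])
                for i in range(len(x))]
    base_a, base_b = smooth(a), smooth(b)
    det_a = [v - s for v, s in zip(a, base_a)]
    det_b = [v - s for v, s in zip(b, base_b)]
    return [(ba + bb) / 2 + (da if abs(da) >= abs(db) else db)
            for ba, bb, da, db in zip(base_a, base_b, det_a, det_b)]

flat = [0.0] * 5                    # featureless "visible" signal
spiky = [0.0, 0.0, 9.0, 0.0, 0.0]   # "infrared" signal with a hot spot
fused = fuse_1d(flat, spiky)        # the hot spot survives fusion
```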
Citations: 3