2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP): Latest Publications

PolSAR image classification using discriminative clustering
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958798
Haixia Bi, Jian Sun, Zongben Xu
Abstract: This paper presents a novel unsupervised image classification method for polarimetric synthetic aperture radar (PolSAR) data. The proposed method is based on a discriminative clustering framework that explicitly relies on a discriminative supervised classification technique to perform unsupervised clustering. To implement this idea, we design an energy function for unsupervised PolSAR image classification by combining a supervised softmax regression model with a Markov random field (MRF) smoothness constraint. In this model, both the pixel-wise class labels and the classifiers are treated as unknown variables to be optimized. Starting from initial class labels generated by Cloude-Pottier decomposition and the K-Wishart distribution hypothesis, we iteratively optimize the classifiers and the class labels by alternately minimizing the energy function with respect to each. Finally, the optimized class labels are taken as the classification result, and the classifiers for the different classes are obtained as a by-product. We apply this approach to real PolSAR benchmark data. Extensive experiments show that our approach effectively classifies PolSAR images in an unsupervised way and produces higher accuracies than the compared state-of-the-art methods. (See the sketch after this entry.)
Citations: 5
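The alternating optimization described above can be illustrated with a minimal sketch: scikit-learn's multinomial logistic regression stands in for the softmax model, and a 3x3 majority filter stands in for the MRF smoothness term. The polarimetric feature extraction, the Cloude-Pottier / K-Wishart initialization, and the paper's exact energy function are not reproduced; the function name and parameters are illustrative assumptions.

```python
# Minimal sketch of discriminative clustering by alternating optimization.
# LogisticRegression plays the role of the softmax model; a 3x3 mode filter is
# a crude surrogate for the MRF smoothness prior (not the paper's exact model).
import numpy as np
from scipy import ndimage
from sklearn.linear_model import LogisticRegression

def discriminative_clustering(features, init_labels, n_iter=5):
    """features: (H, W, D) pixel features; init_labels: (H, W) initial classes."""
    h, w, d = features.shape
    X = features.reshape(-1, d)
    labels = init_labels.copy()
    for _ in range(n_iter):
        # 1) Fit the softmax (multinomial logistic) classifier on current labels.
        clf = LogisticRegression(max_iter=200)
        clf.fit(X, labels.ravel())
        # 2) Re-assign labels from the classifier predictions.
        labels = clf.predict(X).reshape(h, w)
        # 3) Spatial smoothing: replace each label by the majority label in its
        #    3x3 neighbourhood (stand-in for minimizing the MRF term).
        def local_mode(values):
            vals, counts = np.unique(values.astype(int), return_counts=True)
            return vals[np.argmax(counts)]
        labels = ndimage.generic_filter(labels, local_mode, size=3)
    return labels, clf
```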
An enhanced deep convolutional neural network for densely packed objects detection in remote sensing images
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958800
Zhipeng Deng, Lin Lei, Hao Sun, H. Zou, Shilin Zhou, Juanping Zhao
Abstract: Faster Region-based Convolutional Neural Networks (FRCN) have shown great success in object detection in recent years. However, their performance degrades on densely packed objects in real remote sensing applications. To address this problem, an enhanced deep CNN based method is developed in this paper. Following the common pipeline of "CNN feature extraction + region proposal + region classification", our method is primarily based on the latest Residual Networks (ResNets) and consists of two sub-networks: an object proposal network and an object detection network. For detecting densely packed objects, the outputs of multi-scale layers are combined to enhance the resolution of the feature maps. Our method is trained on the VHR-10 dataset with limited samples and successfully tested on large-scale Google Earth images, such as aircraft boneyards and tank farms, which contain a substantial number of densely packed objects. (See the sketch after this entry.)
Citations: 24
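The multi-scale fusion idea (combining the outputs of several backbone stages to recover resolution for small, densely packed objects) can be sketched in a few lines of PyTorch. The stage names c3/c4/c5, channel counts, and fusion by upsampling plus concatenation are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of multi-scale feature fusion: deeper, coarser feature maps are
# upsampled to the finest stage's resolution and concatenated channel-wise.
import torch
import torch.nn.functional as F

def fuse_multiscale(c3, c4, c5):
    """c3, c4, c5: hypothetical ResNet stage outputs at strides 8, 16 and 32."""
    target_size = c3.shape[-2:]                    # fuse at the finest resolution
    c4_up = F.interpolate(c4, size=target_size, mode="bilinear", align_corners=False)
    c5_up = F.interpolate(c5, size=target_size, mode="bilinear", align_corners=False)
    return torch.cat([c3, c4_up, c5_up], dim=1)    # enhanced-resolution feature map

# Example with dummy tensors standing in for backbone outputs:
c3 = torch.randn(1, 256, 64, 64)
c4 = torch.randn(1, 512, 32, 32)
c5 = torch.randn(1, 1024, 16, 16)
fused = fuse_multiscale(c3, c4, c5)                # -> shape (1, 1792, 64, 64)
```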
Airborne Ka-band digital beamforming SAR system and flight test
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958791
Hui Wang, Shoulun Dai, Shichao Zheng
Abstract: In this paper, a Ka-band digital beamforming (DBF) SAR demonstrator system is introduced, which was successfully demonstrated in 2016. The system has two modes: stripmap mode and GMTI mode. In GMTI mode, the receiving antenna of the demonstrator is divided into 24 channels: 8 channels in range are used to implement the DBF-SCORE technique, which improves the signal-to-noise ratio of the system, and 3 channels in azimuth are used for GMTI. The system design and architecture are described. Finally, the flight test results are presented and some real-data processing results are given. (See the sketch after this entry.)
Citations: 3
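The scan-on-receive (DBF-SCORE) principle used on the 8 range channels can be illustrated with a short numpy sketch: the receive beam is steered towards the expected direction of arrival of the echo at each fast-time sample, which raises the SNR after summation. The antenna spacing, wavelength, and the angle-versus-time law are illustrative assumptions, not the demonstrator's actual parameters.

```python
# Minimal sketch of DBF-SCORE: time-varying steering weights compensate the
# inter-channel phase ramp for the expected echo direction, then channels are summed.
import numpy as np

def dbf_score(channels, elev_angles, d=0.01, wavelength=0.0084):
    """channels: (n_chan, n_samples) complex fast-time data of the range channels.
    elev_angles: (n_samples,) expected off-boresight angle [rad] of the echo at
    each fast-time sample (follows the pulse as it sweeps across the swath)."""
    n_chan, n_samples = channels.shape
    n = np.arange(n_chan)[:, None]                 # channel index along elevation
    weights = np.exp(-1j * 2 * np.pi * d * n * np.sin(elev_angles)[None, :] / wavelength)
    return np.sum(weights * channels, axis=0)      # beamformed fast-time signal
```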
Fast vehicle detection in UAV images
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958795
Tianyu Tang, Zhipeng Deng, Shilin Zhou, Lin Lei, H. Zou
Abstract: Fast and accurate vehicle detection in unmanned aerial vehicle (UAV) images remains a challenge, due to their very high spatial resolution and very few annotations. Although numerous vehicle detection methods exist, most of them cannot achieve real-time detection across different scenes. Recently, deep learning algorithms have achieved excellent detection performance in computer vision, especially the regression-based convolutional neural network YOLOv2, which performs well in both accuracy and speed, outperforming other state-of-the-art detection methods. This paper is the first to investigate the use of YOLOv2 for vehicle detection in UAV images, as well as to explore a new method for data annotation. Our method starts with image annotation and data augmentation. The CSK tracking method is used to help annotate vehicles in images captured from simple scenes. Subsequently, the regression-based single convolutional neural network YOLOv2 is used to detect vehicles in UAV images. To evaluate our method, UAV video images were taken over several urban areas, and experiments were conducted on this dataset and the Stanford Drone dataset. The experimental results show that our data preparation strategy is useful and that YOLOv2 is effective for real-time vehicle detection in UAV video images. (See the sketch after this entry.)
Citations: 55
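The tracker-assisted annotation step can be sketched as follows: a vehicle box drawn once in the first frame is propagated through a simple UAV clip to generate bounding boxes for training. The paper uses CSK tracking; here OpenCV's KCF tracker (a descendant of CSK, assumed available via opencv-contrib-python) stands in for it, and the video path and box are placeholders.

```python
# Minimal sketch of tracker-assisted annotation for simple scenes: one manual
# box is propagated frame-to-frame; the resulting boxes serve as YOLO labels.
import cv2

def propagate_annotation(video_path, init_box):
    """init_box: (x, y, w, h) of a vehicle drawn in the first frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    tracker = cv2.TrackerKCF_create()      # KCF stands in for CSK here
    tracker.init(frame, init_box)
    boxes = [init_box]
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        found, box = tracker.update(frame)
        if not found:
            break                          # hand the clip back to a human annotator
        boxes.append(tuple(int(v) for v in box))
    cap.release()
    return boxes                           # per-frame boxes usable as training labels

# Example (path and box are placeholders):
# boxes = propagate_annotation("uav_clip.mp4", (420, 300, 36, 18))
```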
A modified faster R-CNN based on CFAR algorithm for SAR ship detection
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958815
Miao Kang, Xiangguang Leng, Zhao Lin, K. Ji
Abstract: SAR ship detection is essential to marine monitoring. Recently, with the development of deep neural networks and the rapid growth of available SAR imagery, SAR ship detection based on deep neural networks has become a trend. However, the multi-scale ships in SAR images cause undesirable differences in features, which decrease the accuracy of ship detection based on deep learning methods. To address this problem, this paper modifies Faster R-CNN, a state-of-the-art object detection network, with the traditional constant false alarm rate (CFAR) algorithm. Taking the object proposals generated by Faster R-CNN as the guard windows of the CFAR algorithm, the method picks up small-sized targets. By re-evaluating the bounding boxes that have relatively low classification scores in the detection network, the method achieves better detection performance. (See the sketch after this entry.)
Citations: 168
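The CFAR principle that the paper couples with Faster R-CNN can be shown with a plain cell-averaging CFAR on a SAR intensity image: the clutter level around each cell is estimated from a training ring outside a guard window, and the cell is declared a detection if it exceeds that estimate by a factor alpha. This is a generic image-domain CA-CFAR, not the paper's proposal-guided variant; window sizes and alpha are illustrative.

```python
# Minimal cell-averaging CFAR on a 2-D SAR intensity image using box filters.
import numpy as np
from scipy.ndimage import uniform_filter

def ca_cfar(intensity, guard=4, train=8, alpha=5.0):
    """intensity: (H, W) SAR intensity image. Returns a boolean detection mask."""
    intensity = np.asarray(intensity, dtype=float)
    g = 2 * guard + 1                 # guard window edge length (cells excluded)
    t = 2 * (guard + train) + 1       # full window edge length (guard + training)
    # Local means over the full window and over the guard window.
    full_mean = uniform_filter(intensity, size=t)
    guard_mean = uniform_filter(intensity, size=g)
    # Mean of the training ring only (full window minus guard window).
    train_sum = full_mean * t * t - guard_mean * g * g
    clutter = train_sum / (t * t - g * g)
    return intensity > alpha * clutter
```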
Monitoring ghost cities at prefecture level from multi-source remote sensing data
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958810
Xiaolong Ma, Zhaoting Ma, X. Tong, Sicong Liu
Abstract: Monitoring urban spatial information is important for keeping the process of urbanization in balance with human activity and the environment. To extend the application of remote sensing technology to the topic of ghost cities, an effective method is proposed to monitor and evaluate the "ghost city" phenomenon in prefecture-level cities of China by taking advantage of multi-source remote sensing datasets, namely the Defense Meteorological Satellite Program-Operational Linescan System (DMSP-OLS) nighttime light data and auxiliary data such as Landsat images and land-cover/land-use datasets. Based on several indexes related to urban expansion and landscape pattern, experiments were conducted with the proposed approach in Weihai, using statistics and Landsat images for reference. Compared with the Optimized-Sample-Selection (OSS) method, the proposed method achieved better performance, with relatively fewer errors and a better visual display of the spatial dynamics of urban expansion in Weihai during 2000-2010, revealing the specific characteristics of urban expansion patterns in that period.
Citations: 3
The development of deep learning in synthetic aperture radar imagery
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958802
C. Schwegmann, W. Kleynhans, B. P. Salmon
Abstract: The use of remote sensing to observe environments necessitates interdisciplinary approaches to derive effective, impactful research. One remote sensing technique, synthetic aperture radar (SAR), has shown significant benefits over traditional remote sensing techniques, but comes at the price of additional complexity. To cope with this, researchers have begun to apply advanced machine learning techniques known as deep learning to SAR data. Deep learning represents the next stage in the evolution of machine intelligence, placing the onus of identifying salient features on the network rather than the researcher. This paper outlines machine learning techniques as they have previously been used on SAR; what deep learning is and where it fits in compared to traditional machine learning; what benefits can be derived by applying it to SAR imagery; and finally some obstacles that still need to be overcome in order to obtain consistent, long-term results from deep learning in SAR.
Citations: 6
Change detection of SAR images based on supervised contractive autoencoders and fuzzy clustering
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958819
Jie Geng, Hongyu Wang, Jianchao Fan, Xiaorui Ma
Abstract: In this paper, supervised contractive autoencoders (SCAEs) combined with fuzzy c-means (FCM) clustering are developed for change detection in synthetic aperture radar (SAR) images, aiming to take advantage of deep neural networks to capture changed features. Given two original SAR images, the Lee filter is used for preprocessing and the difference image (DI) is obtained by the log-ratio method. Then, FCM is applied to the DI, which yields pseudo labels to guide the training of the SCAEs. Finally, the SCAEs learn changed features from the bitemporal images and the DI, which produces discriminative features and improves detection accuracy. Experiments on three datasets demonstrate that the proposed method outperforms several related approaches. (See the sketch after this entry.)
Citations: 20
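The pre-clustering stage can be sketched concretely: the log-ratio difference image of two (already despeckled) SAR intensity images is clustered into "changed" / "unchanged" with a small hand-rolled fuzzy c-means, giving the pseudo labels that would guide SCAE training. The SCAE network itself is not sketched, and the FCM parameters are illustrative assumptions.

```python
# Minimal sketch of the log-ratio difference image plus fuzzy c-means pseudo-labelling.
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio difference image of two co-registered SAR intensity images."""
    return np.abs(np.log((img1 + eps) / (img2 + eps)))

def fuzzy_cmeans_1d(x, c=2, m=2.0, n_iter=50):
    """x: 1-D array of DI values. Returns (memberships (N, c), class centers (c,))."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(c), size=x.size)         # random initial memberships
    for _ in range(n_iter):
        um = u ** m
        centers = um.T @ x / um.sum(axis=0)            # fuzzy-weighted class centers
        dist = np.abs(x[:, None] - centers[None, :]) + 1e-12
        u = 1.0 / (dist ** (2.0 / (m - 1)))            # standard FCM membership update
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# Usage with two despeckled intensity images sar_t1, sar_t2:
# di = log_ratio(sar_t1, sar_t2)
# u, centers = fuzzy_cmeans_1d(di.ravel())
# pseudo_labels = u.argmax(axis=1).reshape(di.shape)   # 0/1 labels guiding SCAE training
```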
3-D imaging of high-speed moving space target
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958797
Yu-Xue Sun, Ying Luo, Qun Zhang, Song Zhang
Abstract: The high-speed motion of space targets introduces distortion and migration into the range profile, which has a negative effect on three-dimensional (3-D) imaging of the targets. In this paper, a 3-D imaging method for high-speed moving space targets is proposed based on parametric sparse representation. First, the impact of high speed on the range profile is analyzed. Then, based on an L-shaped three-antenna interferometric system, a dynamic joint parametric sparse representation model of the echoes from the three antennas is established. The dictionary matrix is refined by iterative estimation of the velocity. Finally, interferometric processing is conducted to obtain the 3-D image of the target scatterers. Simulation results verify the effectiveness of the proposed method. (See the sketch after this entry.)
Citations: 0
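The interferometric step can be summarized by the standard small-angle relation below, stated here as a generic assumption rather than the paper's exact signal model: lambda is the wavelength, R_0 the reference range, L the baseline length of one arm of the L-shaped array, and Delta-phi_x the phase difference between the two receive antennas of that baseline.

```latex
% Generic interferometric scaling (small-angle approximation), not necessarily
% the exact model of the paper: a scatterer at cross-range x and range R_0 gives
\[
  \Delta\varphi_x \;\approx\; \frac{2\pi L\,x}{\lambda R_0}
  \qquad\Longrightarrow\qquad
  \hat{x} \;=\; \frac{\lambda R_0\,\Delta\varphi_x}{2\pi L}.
\]
% The orthogonal baseline of the L-shaped array yields \hat{y} analogously;
% combined with the slant range from the velocity-compensated range profiles,
% this fixes the 3-D position of each scatterer.
```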
Hyperspectral image classification based on spectral-spatial feature extraction
2017 International Workshop on Remote Sensing with Intelligent Processing (RSIP) Pub Date: 2017-05-18 DOI: 10.1109/RSIP.2017.7958808
Zhen Ye, Li-ling Tan, Lin Bai
Abstract: A novel hyperspectral classification algorithm based on spectral-spatial feature extraction is proposed. First, spectral-spatial features are extracted by Gabor transform in the PCA-projected space. The Gabor-feature bands are then partitioned into multiple subsets, and the adjacent features in each subset are fused. Finally, the fused features are processed by recursive filtering before being fed into a support vector machine (SVM) classifier. Experimental results demonstrate that the proposed algorithm substantially outperforms traditional and state-of-the-art methods. (See the sketch after this entry.)
Citations: 8
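The core of the pipeline can be sketched with standard libraries: PCA projection of the hyperspectral cube, Gabor filtering of the leading components to obtain spatial features, and an RBF-SVM on the stacked features. The band partitioning, feature fusion, and recursive filtering steps of the paper are omitted, and all parameter values are illustrative assumptions.

```python
# Minimal sketch of PCA + Gabor spectral-spatial features followed by an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from skimage.filters import gabor

def gabor_pca_features(cube, n_components=10, frequencies=(0.1, 0.2),
                       thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """cube: (H, W, B) hyperspectral image. Returns an (H*W, F) feature matrix."""
    h, w, b = cube.shape
    pcs = PCA(n_components=n_components).fit_transform(cube.reshape(-1, b))
    pcs = pcs.reshape(h, w, n_components)
    feats = []
    for k in range(n_components):
        for f in frequencies:
            for theta in thetas:
                real, imag = gabor(pcs[:, :, k], frequency=f, theta=theta)
                feats.append(np.sqrt(real ** 2 + imag ** 2))   # Gabor magnitude
    return np.stack(feats, axis=-1).reshape(h * w, -1)

# Training on a few labelled pixels (labels: (H*W,) with -1 marking unlabelled):
# X = gabor_pca_features(cube)
# mask = labels >= 0
# clf = SVC(kernel="rbf", C=100, gamma="scale").fit(X[mask], labels[mask])
# classification_map = clf.predict(X).reshape(cube.shape[:2])
```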