2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS): Latest Publications

Multi-Modal Remote Sensing Image Registration Based on Multi-Scale Phase Congruency
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486287
Song Cui, Yanfei Zhong
{"title":"Multi-Modal Remote Sensing Image Registration Based on Multi-Scale Phase Congruency","authors":"Song Cui, Yanfei Zhong","doi":"10.1109/PRRS.2018.8486287","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486287","url":null,"abstract":"Automatic matching of multi-modal remote sensing images remains a challenging task in remote sensing image analysis due to significant non-linear radiometric differences between these images. This paper introduces the phase congruency model with illumination and contrast invariance for image matching, and extends the model to a novel image registration method, named as multi-scale phase consistency (MS-PC). The Euclidean distance between MS-PC descriptors is used as similarity metric to achieve correspondences. The proposed method is evaluated with four pairs of multi-model remote sensing images. The experimental results show that MS-PC is more robust to the radiation differences between images, and performs better than two popular method (i.e. SIFT and SAR-SIFT) in both registration accuracy and tie points number.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130019209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Reconstructing Lattices from Permanent Scatterers on Facades
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486322
E. Michaelsen, U. Soergel
{"title":"Reconstructing Lattices from Permanent Scatterers on Facades","authors":"E. Michaelsen, U. Soergel","doi":"10.1109/PRRS.2018.8486322","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486322","url":null,"abstract":"In man-made structures regularities and repetitions prevails. In particular in building facades lattices are common in which windows and other elements are repeated as well in vertical columns as in horizontal rows. In very-high-resolution space-borne radar images such lattices appear saliently. Even untrained arbitrary subjects see the structure instantaneously. However, automatic perceptual grouping is rarely attempted. This contribution applies a new lattice grouping method to such data. Utilization of knowledge about the particular mapping process of such radar data is distinguished from the use of Gestalt laws. The latter are universally applicable to all kinds of pictorial data. An example with so called permanent scatterers in the city of Berlin shows what can be achieved with automatic perceptual grouping alone, and what can be gained using domain knowledge. Keywords- perceptual grouping, SAR, permanent scatterers, façade recognition","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129131599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Using a VGG-16 Network for Individual Tree Species Detection with an Object-Based Approach
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486395
M. Rezaee, Yun Zhang, Rakesh K. Mishra, Fei Tong, Hengjian Tong
{"title":"Using a VGG-16 Network for Individual Tree Species Detection with an Object-Based Approach","authors":"M. Rezaee, Yun Zhang, Rakesh K. Mishra, Fei Tong, Hengjian Tong","doi":"10.1109/PRRS.2018.8486395","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486395","url":null,"abstract":"Acquiring information about forest stands such as individual tree species is crucial for monitoring forests. To date, such information is assessed by human interpreters using airborne or an Unmanned Aerial Vehicle (UAV), which is time/cost consuming. The recent advancement in remote sensing image acquisition, such as WorldView-3, has increased the spatial resolution up to 30 cm and spectral resolution up to 16 bands. This advancement has significantly increased the potential for Individual Tree Species Detection (ITSD). In order to use the single source Worldview-3 images, our proposed method first segments the image to delineate trees, and then detects trees using a VGG-16 network. We developed a pipeline for feeding the deep CNN network using the information from all the 8 visible-near infrareds' bands and trained it. The result is compared with two state-of-the-art ensemble classifiers namely Random Forest (RF) and Gradient Boosting (GB). Results demonstrate that the VGG-16 outperforms all the other methods reaching an accuracy of about 92.13%.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126826878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 18
Collaborative Classification of Hyperspectral and LIDAR Data Using Unsupervised Image-to-Image CNN
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486164
Mengmeng Zhang, Wei Li, Xueling Wei, Xiang Li
{"title":"Collaborative Classification of Hyperspectral and LIDAR Data Using Unsupervised Image-to-Image CNN","authors":"Mengmeng Zhang, Wei Li, Xueling Wei, Xiang Li","doi":"10.1109/PRRS.2018.8486164","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486164","url":null,"abstract":"Currently, how to efficiently exploit useful information from multi-source remote sensing data for better Earth observation becomes an interesting but challenging problem. In this paper, we propose an collaborative classification framework for hyperspectral image (HSI) and Light Detection and Ranging (LIDAR) data via image-to-image convolutional neural network (CNN). There is an image-to-image mapping, learning a representation from input source (i.e., HSI) to output source (i.e., LIDAR). Then, the extracted features are expected to own characteristics of both HSI and LIDAR data, and the collaborative classification is implemented by integrating hidden layers of the deep CNN. Experimental results on two real remote sensing data sets demonstrate the effectiveness of the proposed framework.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125191331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
The UAV Image Classification Method Based on the Grey-Sigmoid Kernel Function Support Vector Machine
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486193
Pel Pengcheng, Shi Yue, Wan ChengBo, Ma Xinming, Guo Wa, Qiao Rongbo
{"title":"The UAV Image Classification Method Based on the Grey-Sigmoid Kernel Function Support Vector Machine","authors":"Pel Pengcheng, Shi Yue, Wan ChengBo, Ma Xinming, Guo Wa, Qiao Rongbo","doi":"10.1109/PRRS.2018.8486193","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486193","url":null,"abstract":"Since SVM is sensitive to the noises and outliers in the training set, a new SVM algorithm based on affinity Grey-Sigmoid kernel is proposed in the paper. The cluster membership is defined by the distance from the cluster center, but also defined by the affinity among samples. The affinity among samples is measured by the minimum super sphere which containing the maximum of the samples. Then the Grey degree of samples are defined by their position in the super sphere. Compared with the SVM based on traditional Sigmoid kernel, experimental results show that the Grey-Sigmoid kernel is more robust and efficient.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"57 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114050358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep Learning Integrated with Multiscale Pixel and Object Features for Hyperspectral Image Classification
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486304
Meng Zhang, L. Hong
{"title":"Deep Learning Integrated with Multiscale Pixel and Object Features for Hyperspectral Image Classification","authors":"Meng Zhang, L. Hong","doi":"10.1109/PRRS.2018.8486304","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486304","url":null,"abstract":"The spectral resolution and spatial resolution of hyperspectral images are continuously improving, providing rich information for interpreting remote sensing image. How to improve the image classification accuracy has become the focus of many studies. Recently, Deep learning is capable to extract discriminating high-level abstract features for image classification task, and some interesting results have been acquired in image processing. However, when deep learning is applied to the classification of hyperspectral remote sensing images, the spectral-based classification method is short of spatial and scale information; the image patch-based classification method ignores the rich spectral information provided by hyperspectral images. In this study, a multi-scale feature fusion hyperspectral image classification method based on deep learning was proposed. Firstly, multiscale features were obtained by multi-scale segmentation. Then multiscale features were input into the convolution neural network to extract high-level features. Finally, the high-level features were used for classification. Experimental results show that the classification results of the fusion multi-scale features are better than the single-scale features and regional feature classification results.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133976399","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Fine Registration of Mobile and Airborne LiDAR Data Based on Common Ground Points
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486181
Yanming Chen, Xiaoqiang Liu, Mengru Yao, Liang Cheng, Manchun Li
{"title":"Fine Registration of Mobile and Airborne LiDAR Data Based on Common Ground Points","authors":"Yanming Chen, Xiaoqiang Liu, Mengru Yao, Liang Cheng, Manchun Li","doi":"10.1109/PRRS.2018.8486181","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486181","url":null,"abstract":"Light Detection and Ranging (LiDAR), as an active remote sensing technology, can be mounted on satellite, aircraft, vehicle, tripod and other platforms to acquire three-dimensional information of the earth surface efficiently. However, it is difficult to obtain omnidirectional three-dimensional information of the earth surface using a LiDAR system from a single platform. So the integration of multi-platform LiDAR data, in which data registration is a core part, has become an important topic in geospatial information processing. In this paper, the iterative closest common ground points registration method is proposed. Firstly, the possible common ground points of mobile and airborne LiDAR data are extracted. And then the adaptive octree structure is utilized to thin the LiDAR ground points, which make mobile and airborne LiDAR ground points have the same point density. Finally, the fine registration parameters are calculated by the iterative closest point (ICP) method, in which the thinned ground points from two sources are input data. The innovation of this method is that the common ground points and adaptive octree structure are used to optimize the input data of iterative closest point, which overcomes the registration difficulty caused by different perspectives and resolutions of mobile and airborne LiDAR. The proposed method was tested in this paper and can effectively realize the fine registration of mobile and airborne LiDAR data and make the façade points acquired by mobile LiDAR and the roof points acquired by airborne LiDAR fitter.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114220181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Automatic Identification of Soil Layer from Borehole Digital Optical Image and GPR Based on Color Features
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486325
L. Li, C. Yu, T. Sun, Z. Han, X. Tang
{"title":"Automatic Identification of Soil Layer from Borehole Digital Optical Image and GPR Based on Color Features","authors":"L. Li, C. Yu, T. Sun, Z. Han, X. Tang","doi":"10.1109/PRRS.2018.8486325","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486325","url":null,"abstract":"For the high-resolution borehole image obtained by digital panoramic borehole camera system, a method for recognizing soil layer based on color features is proposed. Due to the obvious difference in color between soil layer and common rock layer, a soil layer detection model based on HSV color space is established. The binarized image of soil layer is obtained by using this model. Secondly, the binary image is filtered to depress the noise effects. Then, the binarized image of the soil layer is divided and the density of pixels in each segmentation is calculated to determine the depth, area and direction of the soil layer, so that the identification of soil layer in the digital borehole image can be achieved. Through verifying this method with many actual borehole images and comparing them with the corresponding borehole radar images, the result illustrate that this method can identify all of the soil layer throughout the whole borehole digital optical image automatically and quickly. It provides a new reliable method for the automatic identification of borehole structural planes in engineering application.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115272104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
DispNet Based Stereo Matching for Planetary Scene Depth Estimation Using Remote Sensing Images
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486195
Qingling Jia, Xue Wan, Baoqin Hei, Shengyang Li
{"title":"DispNet Based Stereo Matching for Planetary Scene Depth Estimation Using Remote Sensing Images","authors":"Qingling Jia, Xue Wan, Baoqin Hei, Shengyang Li","doi":"10.1109/PRRS.2018.8486195","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486195","url":null,"abstract":"Recent work has shown that convolutional neural network can solve the stereo matching problems in artificial scene successfully, such as buildings, roads and so on. However, whether it is suitable for remote sensing stereo image matching in featureless area, for example lunar surface, is uncertain. This paper exploits the ability of DispNet, an end-to-end disparity estimation algorithm based on convolutional neural network, for image matching in featureless lunar surface areas. Experiments using image pairs from NASA Polar Stereo Dataset demonstrate that DispNet has superior performance in the aspects of matching accuracy, the continuity of disparity and speed compared to three traditional stereo matching methods, SGM, BM and SAD. Thus it has the potential for the application in future planetary exploration tasks such as visual odometry for rover navigation and image matching for precise landing","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124693711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
An Encoding-Based Back Projection Algorithm for Underground Holes Detection via Ground Penetrating Radar
2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS) Pub Date : 2018-08-01 DOI: 10.1109/PRRS.2018.8486182
Shaokun Zhang, Zhiyou Hong, Yiping Chen, Zejian Kang, Zhipeng Luo, Jonathan Li
{"title":"An Encoding-Based Back Projection Algorithm for Underground Holes Detection via Ground Penetrating Radar","authors":"Shaokun Zhang, Zhiyou Hong, Yiping Chen, Zejian Kang, Zhipeng Luo, Jonathan Li","doi":"10.1109/PRRS.2018.8486182","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486182","url":null,"abstract":"As underground cavities can cause ground collapse, which will make serious threat to people's safety and property. It is of great significance to implement underground cavity inspection on urban streets and roads subgrade. In the practical application of engineering, the ground penetrating radar (GPR) has shown promising for detection of underground cavities. In this paper, we propose a novel encoding-based back projection (EBP) algorithm to detect underground holes. Our proposed method has a natural filtering function and avoids the effect of trailing, which makes the target localization more accurate. The experiments use the simulation data derived from the GPR numerical simulation software (GprMax) and the measured data collected from the Latvia radar system. And the results demonstrate that the proposed method has superior performance.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129917713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0