{"title":"Research on Automatic Generation and Data Organization Method of Control Points","authors":"Lai Guangling, Z. Yongsheng, Tong Xiaochong, Li Kai, Ding Lu","doi":"10.1109/PRRS.2018.8486252","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486252","url":null,"abstract":"High precision control points are indispensable for the improvement of geometric positioning accuracy of aerial and space images. At present, most control points need to be installed manually, and which obtained in this way are fixed to a specific area and have high installation and maintenance cost. Satellites can only correct their orbit and attitude in real time when they pass through the area with control points. Therefore, setting up control points by this way has poor flexibility and is not conducive to the improvement of satellite positioning accuracy. In order to solve this problem, an automatic control point generation algorithm based on natural ground object automatic recognition and detection is proposed. First, typical ground objects such as playground and road intersection are automatically identified by YOLO algorithm, and feature extraction is carried out by classic SIFT feature extraction operator on the basis of recognition. Then, the feature extraction results, along with the target attribute, location and other information are stored in the agreed format. Finally, the data of control points are organized by the multi-scale integer coding method based on quadruplication to improve the efficiency of data storage and access. This method can make full use of high precision surveying and mapping satellite image data and set up control points around the world. Satellites can correct their orbit and attitude at any time according to their needs, and can greatly improve the positioning accuracy of images.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133833920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Method of Building Extraction Using Object Based Analysis of High Resolution Remote Sensing Images","authors":"Wang Yan, Tao Zui, Lyu Fenghua","doi":"10.1109/PRRS.2018.8486404","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486404","url":null,"abstract":"More high spatial resolution remote sensing images can be available today. High resolution means more details on the images and it gives a chance to find the spatial relationship among ground objects. Aiming at extracting building from high resolution remote sensing images, this paper proposed a method based on Geographic Object-Based Image Analysis (GEOBIA), using the relationship among shadow, greenland, buildings and the building its own characteristics to try to extract all the buildings on the high resolution remote sensing images. The experiment chose the ISPRS sample images as study area and the result has proved the validity of the method.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130517254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Elegant End-to-End Fully Convolutional Network (E3FCN) for Green Tide Detection Using MODIS Data","authors":"Haoyu Yin, Yingjian Liu, Qiang Chen","doi":"10.1109/PRRS.2018.8486160","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486160","url":null,"abstract":"Using remote sensing (RS) data to monitor the onset, proliferation and decline of green tide (GT) has great significance for disaster warning, trend prediction and decision-making support. However, remote sensing images vary under different observing conditions, which bring big challenges to detection missions. This paper proposes an accurate green tide detection method based on an Elegant End-to-End Fully Convolutional Network (E3FCN) using Moderate Resolution Imaging Spectroradiometer (MODIS) data. In preprocessing, RS images are firstly separated into subimages by a sliding window. To detect GT pixels more efficiently, the original Fully Convolutional Neural Network (FCN) architecture is modified into E3FCN, which can be trained end-to-end. The E3FCN model can be divided into two parts, contracting path and expanding path. The contracting path aims to extract high-level features and the expanding path aims to provide a pixel-level prediction by using a skip technique. The prediction result of whole image is generated by merging the prediction results of subimages, which can also improve the final performance. Experiment results show that the average precision of E3FCN on the whole data sets is 98.06%, compared to 73.27% of Support Vector Regression (SVR), 71.75% of Normalized Difference Vegetation Index (NDVI), and 64.41% of Enhanced Vegetation Index (EVI).","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133490425","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Developing Process Detection of Red Tide Based on Multi-Temporal GOCI Images","authors":"Zhang Feng, Yang Xuying, Sun Xiaoxiao, Du Zhenhong, L. Renyi","doi":"10.1109/PRRS.2018.8486244","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486244","url":null,"abstract":"Red tide, as one of the major marine disasters in the coastal waters, has a significant temporal and spatial characteristics and pattern. A new understanding of red tides evolution can be used to make early predictions for emergency decision-making of red tides. The geostationary ocean color imager (GOCI) with a high space coverage and temporal resolution can fully meet the monitoring needs of the rapidly changing red tide. In this paper, we analyzed the spectral characteristics of red tide water, high turbid water and clean water based on GOCI imagery and proposed a red tide extraction index RrcH by combining the fluorescence line height (FLH). The comparison with buoy monitoring data validated the accuracy and reliability of the RrcH algorithm. The cases show that the formation of the red tides in a highly turbid water environment can be detected and monitored by using GOCI, which is beneficial to disaster prevention and reduction.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125231962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Automatic Image Enhancement Method Based on the Improved HCTLS","authors":"Junyu Chen, Jiahang Liu, Chenghu Zhou, F. Zhu, Tieqiao Chen, Hang Zhang","doi":"10.1109/PRRS.2018.8486212","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486212","url":null,"abstract":"Remote sensing images often suffer low contrast, and the efficiency and robustness of contrast enhancement for remote sensing images is still a challenge. To meet with the requirements of applications, Liu et al recently proposed a self-adaptive contrast enhancement method (HCTLS) based on the histogram compacting transform (HCT). In this method, some gray levels on which the frequency is smaller than a certain reference, will merged into their adjacent levels for a compact level distribution. However, if the merged levels whose corresponding pixels in some connected regions, local contrast of these connected regions will decrease, even disappear. In this paper, an improved enhancement method (DPHCT) for remote sensing image based on the HCTLS is presented for preserving more the local detail and contrast. Firstly, extracting the connected regions from the enhanced result by HCT where the local contrast is decreased or disappeared. These connected regions are decomposed into the inner regions and the boundary regions adaptively. Then, construct pixel values by using the unified brightness function to maintain the contrast for the connected regions inside. At the same time eliminate stitching lines by using a weighted fusion spliced algorithm to eliminate the problem of borders outstanding in result of intensity roughness. Finally, the image is normalized into [0, 255] by linear stretch. Experimental results indicate that the proposed algorithm not only can enhance the global contrast but also can preserve local contrast and details.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122099348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Method of Interactively Extracting Region Objects from High-Resolution Remote Sensing Image Based on Full Connection CRF","authors":"Zhang Chun-sen, Yu Zhen, Hu Yan","doi":"10.1109/PRRS.2018.8486175","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486175","url":null,"abstract":"Aiming at the region objects of high resolution remote sensing images, this paper proposes an interactive region objects extraction method for high-resolution remote sensing images based on fully connected conditional random fields. This method estimates the foreground model by artificial interaction markers. On the basis of using the SLIC algorithm to over segment the input images, combining the color and texture features, the region-based maximum similarity fusion (MSRM) is used to expand the foreground region and establish the global information of the full-connection conditional random field description image. Then, based on the mean-field estimation, the model inference is realized by the high-dimensional Gauss filtering method, and then the contour of the area features is obtained. The experimental results show that the method is effective by extracting the area features such as waters, woodlands, terraces and bare lands on high resolution remote sensing images.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117117093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-Branch Regression Network For Building Classification Using Remote Sensing Images","authors":"Yuanyuan Gui, Xiang Li, Wei Li, Anzhi Yue","doi":"10.1109/PRRS.2018.8486177","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486177","url":null,"abstract":"Convolutional neural networks (CNN) are widely used for processing high-resolution remote sensing images like segmentation or classification, and have been demonstrated excellent performance in recent years. In this paper, a novel classification framework based on segmentation method, called Multi-branch regression network (named as MBR-Net) is proposed. The proposed method can generate multiple losses rely on training images in different size of information. In addition, a complete training strategy for classifying remote sensing images, which can reduce the influence of uneven samples is also developed. Experimental results with Inrial aerial dataset demonstrate that the proposed framework can provide much better results compared to state-of-the-art U-Net and generate fine-grained prediction maps.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121586127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-precision Centroid Extraction and PSF Calculation on Remote Sensing Image of Point Source Array","authors":"Li Kai, Z. Yongsheng, Z. Zhenchao, Xu Lin","doi":"10.1109/PRRS.2018.8486240","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486240","url":null,"abstract":"The high-precision measurement of remote sensing image geometric and radiometric information is an important basis for remote sensing image geometric and radiometric processing. Based on the theory of image degradation, this paper describes the method of obtaining simulated degradation image of point source array using prior information. Then the shortcomings of traditional Point Spread Function (PSF) parameter solving methods are analyzed, and a new algorithm for PSF parameter solving is proposed on this basis. Experimental results show that the accuracy of geometric center of point source and full-width half-maximum width (FWHM) of PSF obtained by the proposed method from simulated degradation image are better than the traditional algorithms. When the SNR is 40dB, the RMSE of the geometrical position of the point source obtained by proposed algorithm is only 0.01 pixels; the RMSE of FWHM of PSF is only 0.03 pixels. Experimental results further show that the use of the multiphase point source arrays can effectively improve the accuracy of PSF parameter. This paper demonstrates that point source can provide both high precision geometry and radiation information for remote sensing images, and will potentially be an ideal tool for joint geometric and radiometric calibration.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131916184","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ship Detection by Modified RetinaNet","authors":"Yingying Wang, Wei Li, Xiang Li, Xu Sun","doi":"10.1109/PRRS.2018.8486308","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486308","url":null,"abstract":"Ship detection in optical remote sensing imagery has been a hot topic in recent years and achieved promising performance. However, there are still several problems in detecting ships with various sizes. The key objective of all scales precise positioning is to obtain a high resolution feature map while having a high semantic characteristic information. Based on this idea, a modified RetinaNet (M-RetinaNet) is proposed to build dense connections between shallow and deep feature maps, which aims at solving problems resulting from different sizes of ships. It consists of a baseline residual network and a modified multi-scale network. The modified multi-scale network includes a top-down pathway and a bottom-up pathway, both of which build on the multi-scale base network. The benefits of this model are two folds: first, it can generate feature maps with high semantic information at each layer by introducing dense lateral connections from deep to shallow; second, it maintains high spatial resolution in deep layers. Comprehensive evaluations on a ship dataset and comparison with several state-of-the-art approaches demonstrate the effectiveness of the proposed network.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126147406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"SAR Image Matching Area Selection Based on Actual Flight Real-Time Image","authors":"Wang Jianmei, Wang Zhong, Zhang Shaoming, F. Tiantian, Dong Jihui","doi":"10.1109/PRRS.2018.8486416","DOIUrl":"https://doi.org/10.1109/PRRS.2018.8486416","url":null,"abstract":"Matching suitability analysis is a key issue of INS/SAR integrated navigation mode. The existing suitability area selection methods use the simulated real-time image to calculate the matching probability of the scene area and further label it “suitability” or “unsuitability”. If the imaging mode of the simulated image is the same as that of the real image, the suitability area selection model based on the simulated real-time image works well. Otherwise, the model is impractical. In order to address this issue, a novel method is proposed in this paper. The sample dataset is built on the actual flight real-time images, and a hybrid feature selection method based on D-Score and SVM is used to select the suitability features and build the suitability area selection model simultaneously. Experimental results show that the consistency between the prediction results of the model and the ones experts label reaches 81.92%.","PeriodicalId":197319,"journal":{"name":"2018 10th IAPR Workshop on Pattern Recognition in Remote Sensing (PRRS)","volume":"78 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123433513","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}