{"title":"Reflectance spectroscopy and ASTER mapping of aeolian dunes of Shaqra and Tharmada Provinces, Saudi Arabia: Field validation and laboratory confirmation","authors":"Yousef Salem, H. Ghrefat, R. Sankaran","doi":"10.1080/19479832.2022.2069160","DOIUrl":"https://doi.org/10.1080/19479832.2022.2069160","url":null,"abstract":"ABSTRACT Spatial variability of grain sizes and mapping of aeolian dunes are important for studying sand erosion, transport, and dune movement, and for understanding dune encroachment and land degradation. This study examines the grain size statistical parameters and mineralogical composition of 68 sand samples collected from 17 crescentic dunes and assesses the source and depositional environment of these dunes. Grain size analyses show that the sands are fine-grained, with an average size of 2.28 Φ, and are classified as moderately well-sorted (0.59 Φ), mesokurtic (0.97 Φ), and fine to coarsely skewed (0.14 Φ). X-Ray Diffraction shows that the dunes are composed mainly of quartz, calcite, and haematite. The occurrence of absorption features near 0.5, 0.9, and 2.22 μm confirms the presence of these iron and aluminosilicate minerals in the dunes. The dunes of the provinces were mapped using the Carbonate Index (CI) and Quartz Index (QI) applied to the TIR bands of ASTER satellite data. 
The good agreement of the grain size analyses, spectral measurements, mineralogical studies, and dune mapping with the field observations suggests that the sand deposits in the study area have a diversity of sources in the aeolian environment.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44619918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance analysis of parameter estimator on non-linear iterative methods for ultra-wideband positioning","authors":"Chuanyang Wang, Bing He, Liangliang Shi, Weiduo Huang, Liuxu Shan","doi":"10.1080/19479832.2022.2064554","DOIUrl":"https://doi.org/10.1080/19479832.2022.2064554","url":null,"abstract":"ABSTRACT Ultra-wideband is a promising technology for indoor positioning due to its accurate time resolution and good penetration. Since the positioning model is non-linear, iterative methods are often considered for solving the localisation problem. However, the positioning system is prone to becoming ill-posed, and iterative methods then cannot easily converge to a globally optimal solution. In this paper, the convergence properties of four non-linear iterative methods are analytically reviewed under ill-conditioned configurations. For the iteration, three types of initial values are selected. Experimental results demonstrate that although the barycentre method can converge correctly, it is inefficient, requiring many iterations. In addition, the Gauss–Newton method can converge effectively with a good initial value, but it sometimes converges to a false local optimum when a poor initial value is selected. Moreover, both the regularised Gauss–Newton method and the closed-form Newton method can converge to the global optimum effectively with fewer iterations. This study shows that the closed-form Newton method converges more efficiently than the other methods. 
Meanwhile, to make full use of the available measurements and improve accuracy, the result of a non-iterative method is generally used as the initial value of the iterative method.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-04-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43498204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimating Leaf Area Index and biomass of sugarcane based on Gaussian process regression using Landsat 8 and Sentinel 1A observations","authors":"Gebeyehu Abebe, T. Tadesse, B. Gessesse","doi":"10.1080/19479832.2022.2055157","DOIUrl":"https://doi.org/10.1080/19479832.2022.2055157","url":null,"abstract":"ABSTRACT Accurate estimation of crop parameters, such as Leaf Area Index (LAI) and biomass, over large areas using remote sensing techniques is crucial for monitoring crop growth and yield prediction. In this study, a Gaussian Process Regression (GPR) method was developed to estimate LAI and biomass values of sugarcane during the growth season using optical and Synthetic Aperture Radar (SAR) data fusion. Predicting LAI on an independent test data set using the GPR and the combined optical and SAR indices provided good prediction accuracies, with the GPR based on the radial basis function (Root Mean Square Error [RMSE] = 0.34, Mean Absolute Error [MAE] = 0.28 and Mean Absolute Percentage Error [MAPE] = 10.5%) and the polynomial function (RMSE = 0.42, MAE = 0.31 and MAPE = 12.58%), respectively. The test results for sugarcane biomass also showed that the GPR (poly) achieved the best statistical results (RMSE = 2.45 kg/m2, MAE = 1.72 kg/m2, MAPE = 8.1%) using the combined indices. 
The results suggest that the crop biophysical retrieval based on optical and SAR data fusion and GPR proposed in this study could improve LAI and biomass estimation, supporting effective crop growth monitoring and mapping applications.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42895966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A segment-based filtering method for mobile laser scanning point cloud","authors":"Xiangguo Lin, W. Xie","doi":"10.1080/19479832.2022.2047801","DOIUrl":"https://doi.org/10.1080/19479832.2022.2047801","url":null,"abstract":"ABSTRACT In most Mobile Laser Scanning (MLS) applications, filtering is a necessary step. In this paper, a segmentation-based filtering method is proposed for MLS point cloud, where a segment rather than an individual point is the basic processing unit. In particular, the MLS point clouds in some blocks are clustered into segments by a surface growing algorithm, and then the object segments are detected and removed. A segment-based filtering method is employed to detect the ground segments. The experiment in this paper uses two MLS point cloud datasets to evaluate the proposed method. Experiments indicate that, compared with the classic progressive TIN (Triangulated Irregular Network) densification algorithm, the proposed method is capable of reducing the omission error, the commission error and total error by 3.62%, 7.87% and 5.54% on average, respectively.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43575013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Augmentation Method for anti-vibration hammer on power transmission line based on CycleGAN","authors":"Ya-Guang Tian, Yuan-Wei Chen, Wan Diming, Yuan Shaoguang, Mao Wandeng, Wang Chao, Chun-xiao Xu, Yifan Long","doi":"10.1080/19479832.2022.2033855","DOIUrl":"https://doi.org/10.1080/19479832.2022.2033855","url":null,"abstract":"ABSTRACT Checking the status of the power grid is very important. However, the low occurrence of defects in an actual power grid makes it difficult to collect training samples, which affects the training of defect-detection models. In this study, we propose a method for augmenting defective images of a power grid based on cycle-consistent adversarial networks (CycleGAN). The defective image dataset was expanded by fusing in artificial defective samples, converted from defect-free components with the trained CycleGAN model, and updating the corresponding label files. Comparing the accuracy of the object detection model trained on the augmented dataset, we found a 2%–3% Average Precision (AP) improvement over the baseline, with histogram specification as the fusing method achieving the best performance. In conclusion, the generative adversarial network (GAN) and its variants have considerable potential for dataset augmentation as well as scope for further improvement.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49581576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A region based remote sensing image fusion using anisotropic diffusion process","authors":"Bikash Meher, S. Agrawal, Rutuparna Panda, A. Abraham","doi":"10.1080/19479832.2021.2019132","DOIUrl":"https://doi.org/10.1080/19479832.2021.2019132","url":null,"abstract":"ABSTRACT The aim of remote sensing image fusion is to merge the high spectral resolution multispectral (MS) image with the high spatial resolution panchromatic (PAN) image to obtain a high spatial resolution MS image with less spectral distortion. Conventional pixel-level fusion techniques suffer from the halo effect and gradient reversal. To solve this problem, a new region-based method using anisotropic diffusion (AD) for remote sensing image fusion is investigated. The basic idea is to fuse only the ‘Y’ component (of the YCbCr colour space) of the MS image with the PAN image. The base layers and detail layers of the input images, obtained using the AD process, are segmented using the fuzzy c-means (FCM) algorithm and combined based on their spatial frequency. The fusion experiment uses three data sets. The contributions of this paper are as follows: i) it solves the chromaticity loss problem at the time of fusion, ii) the AD filter with the region-based fusion approach is brought into the context of remote sensing applications for the first time, and iii) the edge information in the input images is retained. A qualitative and quantitative comparison is made with classic and recent state-of-the-art methods. 
The experimental results reveal that the proposed method produces promising fusion results.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46689838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fusion and classification of multi-temporal SAR and optical imagery using convolutional neural network","authors":"Achala Shakya, M. Biswas, M. Pal","doi":"10.1080/19479832.2021.2019133","DOIUrl":"https://doi.org/10.1080/19479832.2021.2019133","url":null,"abstract":"ABSTRACT Remote sensing image classification is difficult, especially for agricultural crops with identical phenological growth periods. In this context, multi-sensor image fusion allows a comprehensive representation of biophysical and structural information. Recently, Convolutional Neural Network (CNN)-based methods have been used for several applications due to their spatial-spectral interpretability. Hence, this study explores the potential of fused multi-temporal Sentinel 1 (S1) and Sentinel 2 (S2) images for Land Use/Land Cover classification over an agricultural area in India. For classification, a Bayesian optimised 2D CNN-based DL classifier and a pixel-based SVM classifier were used. For fusion, a CNN-based siamese network with the Ratio-of-Laplacian pyramid method was used for the images acquired over the entire winter cropping period. This fusion strategy leads to better interpretability of results, and the 2D CNN-based DL classifier performed well in terms of classification accuracy for both single-month (95.14% and 96.11%) and multi-temporal (99.87% and 99.91%) fusion, in comparison to the SVM with classification accuracies for single-month (80.02% and 81.36%) and multi-temporal fusion (95.69% and 95.84%). Results indicate better performance by Vertical-Vertical polarised fused images than by Vertical-Horizontal polarised fused images. 
These results imply the need to analyse the classified images obtained by DL classifiers along with the classification accuracy.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44768512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multi-stage guided-filter for SAR and optical satellites images fusion using Curvelet and Gram Schmidt transforms for maritime surveillance","authors":"T. Ghoniemy, M. Hammad, A. Amein, T. Mahmoud","doi":"10.1080/19479832.2021.2003446","DOIUrl":"https://doi.org/10.1080/19479832.2021.2003446","url":null,"abstract":"ABSTRACT Synthetic aperture radar (SAR) images depend on the dielectric properties of objects at certain incident angles. Thus, vessels and other metallic objects appear clearly in SAR images; however, they are difficult to distinguish in optical images. The synergy of these two types of images leads to not only high spatial and spectral resolutions but also a good explanation of the image scene. In this paper, a hybrid pixel-level image fusion method is proposed for integrating panchromatic (PAN), multispectral (MS) and SAR images. The fusion method uses a Multi-stage guided filter (MGF) for optical image pansharpening, to preserve spatial details, and nested Gram-Schmidt (GS) and Curvelet-Transform (CVT) methods for the SAR and optical images, to increase the quality of the final fused image and benefit from the SAR image properties. The accuracy and performance of the proposed method are appraised using Landsat-8 Operational-Land-Imager (OLI) and Sentinel-1 images, both subjectively and objectively using different quality metrics. Moreover, the proposed method is compared to a number of state-of-the-art fusion techniques. The results show significant improvements in both visual quality and the spatial and spectral evaluation metrics. 
Consequently, the proposed method is capable of highlighting maritime activity for further processing.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43470804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Spectral-spatial classification fusion for hyperspectral images in the probabilistic framework via arithmetic optimization Algorithm","authors":"Reza Seifi Majdar, H. Ghassemian","doi":"10.1080/19479832.2021.2001051","DOIUrl":"https://doi.org/10.1080/19479832.2021.2001051","url":null,"abstract":"ABSTRACT Spectral data and spatial information, such as shape and texture features, can be fused to improve the classification of hyperspectral images. In this paper, a novel approach to fusing the spectral and spatial features (texture features and shape features) in the probabilistic framework is proposed. Gabor filters are applied to obtain the texture features, and morphological profiles (MPs) are used to obtain the shape features. These features are classified separately by the support vector machine (SVM); therefore, the per-pixel probabilities can be estimated. A novel meta-heuristic optimisation method called the Arithmetic Optimization Algorithm (AOA) is used to weight the combination of these probabilities. Three parameters, α, β, and γ, determine the weight of each feature in the combination; the optimal values of these parameters are calculated by the AOA. The proposed method is evaluated on three benchmark hyperspectral data sets: Indian Pines, Pavia University and Salinas. The experimental results demonstrate the effectiveness of the proposed combination in hyperspectral image classification, particularly with few labelled samples. 
Moreover, this method is more accurate than a number of recent spectral-spatial classification methods.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45322277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The latest progress of data fusion for integrated disaster reduction intelligence service","authors":"Jiping Liu, M. Konečný, Qingyun Du, Shenghua Xu, F. Ren, Xianghong Che","doi":"10.1080/19479832.2021.1970931","DOIUrl":"https://doi.org/10.1080/19479832.2021.1970931","url":null,"abstract":"Looking back over the past decade, superstorms, wildfires, floods, geological hazards, and monster earthquakes have taken unimaginable tolls all over the planet. In 2020, nearly 138 million people suffered from various natural disasters throughout China, where 591 people died or went missing, and 5.89 million people were relocated for emergency resettlement. This led to direct economic losses of 370.15 billion CNY. With the advances of data acquisition technologies, such as remote sensing and the Internet of Things, disaster-related data can be collected rapidly and easily. However, disaster-related data vary in the acquiring methodology and, as such, vary in geographic scope and resolution; thus, how to fuse various disaster-related data is of significance for emergency disaster reduction (Liu et al. 2020). Disaster-related data are essential in understanding the impacts and costs of disasters, and data fusion plays an essential role in disaster prediction, reduction, assessment, and intelligent services. Using multisource data can improve the information availability and quality derived at various levels (Liu et al. 2018, Liu et al. 2020). Especially for the emergency response, it is particularly imperative to integrate multisource data to provide the latest, accurate and timely information at various scales for disaster reduction services. For example, a large-scale landslide occurred in the Jinsha River Basin at the border of Sichuan and Tibet on 10 October 2018 and formed a barrier lake, which posed a great threat to the lives and property of people in the downstream Jinsha River region (Qiu et al. 2017, Li et al. 2020a). 
Using disaster multisource data fusion (Gamba 2014), spatiotemporal process simulation (Wang et al. 2020), visual analysis and risk assessment (Li et al. 2020), and disaster information intelligent services, decision-making information was generated to support disaster emergency management (Liu et al. 2018). This special issue on Data Fusion for Integrated Disaster Reduction Intelligence Service focuses on the latest theoretical and technical issues related to disaster-related data fusion, and aims to clarify current research progress and provide an opportunity for researchers in this field to learn from and communicate with each other. This special issue is supported by the National Key Research and Development Program of China under Grant No. 2016YFC0803101 and includes six articles spanning various topics. Specifically, an improved frequency domain integration approach is proposed that combines GNSS and accelerometers, using GNSS to obtain an accurate initial position for reconstructing dynamic displacements. An online emergency mapping framework based on a disaster scenario model is introduced, covering knowledge rules, mapping templates, map symbol engines, and a simple wizard, to shorten the mapping cycle in emergencies. A suitability visualisation method is realised for flood fusion 3D scene guided by disaster information through the fusi","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2021-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47516818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}