{"title":"Whale- crow search optimisation enabled deep convolutional neural network for flood detection","authors":"M. B. Mulik, J. V., Pandurangarao N. Kulkarni","doi":"10.1080/19479832.2023.2186957","DOIUrl":"https://doi.org/10.1080/19479832.2023.2186957","url":null,"abstract":"ABSTRACT The satellite images are more attracted in the field of flood detection. For planning actions during emergencies, flood detection plays a vital role, but the major barrier is that using satellite images to detect flooded regions. For flood detection, this method innovates a model named Whale-crow search algorithm on the basis of deep convolutional neural network (W-CSA DCNN) approach. Pre-processing, classification, segmentation and feature extraction are the four steps which is included in this model. For obtaining sound and antiquity from the input image initially, the satellite imagery is given to pre-processing and then for obtaining the features on the basis of vegetation indices the pre-processed image is put through the feature extraction process. By means of Kernel Fuzzy Auto regressive (KFAR) model, the acquire features are subsequently used in the segmentation process. After obtaining the segments, it is given to the classification, which is carried out by means of DCNN and qualified excellently via the W-CSA that is the combination of the Crow Search Algorithm (CSA) and Whale optimisation algorithm (WOA). 
With specificity, accuracy, and sensitivity of 0.982, 0.972, and 0.975, respectively, the proposed method delivers better performance than existing approaches.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44499897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
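The W-CSA trainer named in the abstract hybridises the Crow Search Algorithm with the Whale Optimisation Algorithm. A minimal sketch of the two ingredients follows, using the textbook update rules only; the bounds, parameter values, and the way the paper actually interleaves the two algorithms are assumptions here:

```python
import numpy as np

def csa_step(positions, memory, rng, fl=2.0, ap=0.1, bounds=(-5.0, 5.0)):
    """One Crow Search Algorithm iteration: each crow either follows a
    random crow's memorised position or relocates randomly (awareness)."""
    n, d = positions.shape
    new_pos = positions.copy()
    for i in range(n):
        j = rng.integers(n)              # crow to follow
        if rng.random() >= ap:           # crow j unaware: chase its memory
            new_pos[i] = positions[i] + rng.random() * fl * (memory[j] - positions[i])
        else:                            # crow j aware: random relocation
            new_pos[i] = rng.uniform(bounds[0], bounds[1], size=d)
    return np.clip(new_pos, bounds[0], bounds[1])

def woa_encircle(positions, best, t, t_max, rng):
    """WOA encircling-prey update: candidates shrink toward the current
    best solution as the control parameter a decays linearly from 2 to 0."""
    a = 2.0 * (1.0 - t / t_max)
    r = rng.random(positions.shape)
    A = 2 * a * r - a
    C = 2 * rng.random(positions.shape)
    return best - A * np.abs(C * best - positions)
```

In a hybrid scheme such as W-CSA, one update rule would typically be applied per iteration (or per candidate) to the DCNN weight vector being optimised.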
{"title":"Assessment of micro-vibrations effect on the quality of remote sensing satellites images","authors":"Mohamed A. Ali, F. Eltohamy, Adel Abd-Elrazek, Mohamed Hanafy","doi":"10.1080/19479832.2023.2167874","DOIUrl":"https://doi.org/10.1080/19479832.2023.2167874","url":null,"abstract":"ABSTRACT Recently, there is a growing interest in analysing the degrading effect of satellite micro-vibrations due to the rapid growth in satellite technologies and the urgent need to precisely extract a huge amount of information from satellite images. Different kinds of micro-vibration have a notable effect on the quality of satellite images. The main objective of this paper is to demonstrate and analyse the effect of all types of micro-vibration on the quality of images acquired by high-resolution satellites. An algorithm to simulate micro-vibrations is proposed. A very high-resolution satellite image from the Pleiades-neo satellite is selected as an example to be used in addressing the degrading effects of micro-vibrations. In this paper, the modulation transfer function (MTF) is used as a major function to model the degradation that has been conducted. Also, several quality metrics are used to quantitatively assess the degradation. The key result of this paper is the significant effect of micro-vibrations on the quality of remote sensing satellite images which is attributed to the main influential parameters. 
These parameters include the blur diameter, the vibration displacement, the number of Time Delay and Integration (TDI) stages of the camera, and the ratio of the integration time to the vibration period.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44246295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
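Two standard closed-form MTF models from the optics literature illustrate the kind of degradation being modelled: uniform linear motion blur and high-frequency sinusoidal vibration. These formulas are common reference models and an assumption here, not necessarily the paper's exact formulation:

```python
import numpy as np
from scipy.special import j0

def mtf_linear_motion(f, blur_extent):
    """MTF of uniform linear motion blur: |sinc(a * f)|, where a is the
    blur extent (numpy's sinc already includes the pi factor)."""
    return np.abs(np.sinc(f * blur_extent))

def mtf_sinusoidal_vibration(f, amplitude):
    """MTF of high-frequency sinusoidal vibration: |J0(2*pi*D*f)|,
    where D is the vibration amplitude projected on the focal plane
    and J0 is the zeroth-order Bessel function."""
    return np.abs(j0(2.0 * np.pi * f * amplitude))
```

Both curves equal 1 at zero spatial frequency and fall off as frequency, blur extent, or vibration amplitude grows, which is the qualitative behaviour the quality metrics in the paper quantify.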
{"title":"Synergistic retrievals of leaf area index and soil moisture from Sentinel-1 and Sentinel-2","authors":"T. Quaife, E. Pinnington, P. Marzahn, T. Kaminski, M. Vossbeck, J. Timmermans, C. Isola, B. Rommen, A. Loew","doi":"10.1080/19479832.2022.2149629","DOIUrl":"https://doi.org/10.1080/19479832.2022.2149629","url":null,"abstract":"ABSTRACT Joint retrieval of vegetation status from synthetic aperture radar (SAR) and optical data holds much promise due to the complimentary of the information in the two wavelength domains. SAR penetrates the canopy and includes information about the water status of the soil and vegetation, whereas optical data contains information about the amount and health of leaves. However, due to inherent complexities of combining these data sources there has been relatively little progress in joint retrieval of information over vegetation canopies. In this study, data from Sentinel–1 and Sentinel–2 were used to invert coupled radiative transfer models to provide synergistic retrievals of leaf area index and soil moisture. Results for leaf area are excellent and enhanced by the use of both data sources (RSME is always less than and has a correlation of better than when using both together), but results for soil moisture are mixed with joint retrievals generally showing the lowest RMSE but underestimating the variability of the field data. 
Examples of such synergistic retrieval of plant properties from optical and SAR data using physically based radiative transfer models are uncommon in the literature, but these results highlight the potential for this approach.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43091119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
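The retrieval scores quoted above, RMSE and correlation against field measurements, can be computed as, for example:

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between retrievals and field observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def pearson_r(pred, obs):
    """Pearson correlation coefficient between retrievals and observations."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.corrcoef(pred, obs)[0, 1])
```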
{"title":"Research on adaptive enhancement of robot vision image based on multi-scale filter","authors":"Qin Dong","doi":"10.1080/19479832.2022.2149630","DOIUrl":"https://doi.org/10.1080/19479832.2022.2149630","url":null,"abstract":"ABSTRACT Contrast enhancement and histogram equalisation are two image enhancement methods, which can lead to changes in the edge position of the resulting image, blurring or even loss of details. Therefore, this paper introduces a multi-scale filter to adaptively enhance the robot visual image, improve the brightness of the robot visual image, enrich the image details and reduce the image enhancement time. According to Retinex theory, the characteristic information of robot visual image is obtained, the logarithmic domain operation form of Retinex algorithm is obtained, the robot visual reflection image of high-frequency part is determined, the robot illumination visual image is estimated by multiscale filter, and the scale constant of Gaussian filter is obtained; According to the Retinex algorithm of weighted guided filtering, the robot visual image enhancement process is designed. 
The experimental results show that the mean value of the robot vision image enhanced by this method is 88.63, the standard deviation is 62.78, the information entropy is 8.18, the enhancement time is only 5.9 s, and the PSNR reaches 39.92 dB, demonstrating that the method achieves a good enhancement effect.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43985195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
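The multi-scale Retinex decomposition described above, with illumination estimated by Gaussian filters at several scales and reflection recovered in the log domain, can be sketched as follows. The scale constants are illustrative, and the paper's weighted-guided-filtering refinement is omitted:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_retinex(image, sigmas=(15, 80, 250), eps=1e-6):
    """Multi-scale Retinex: average over scales of
    log(image) - log(illumination), where illumination is the image
    blurred by a Gaussian at each scale constant sigma."""
    img = np.asarray(image, dtype=float) + eps
    log_img = np.log(img)
    out = np.zeros_like(img)
    for sigma in sigmas:
        illumination = gaussian_filter(img, sigma=sigma)
        out += log_img - np.log(illumination + eps)
    return out / len(sigmas)
```

On a perfectly uniform image the estimated illumination equals the image, so the recovered reflection is (nearly) zero everywhere, as Retinex theory predicts.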
{"title":"Information fusion approach for downscaling coarse resolution scatterometer data","authors":"A. Maurya, A. Kukunuri, D. Singh","doi":"10.1080/19479832.2022.2144955","DOIUrl":"https://doi.org/10.1080/19479832.2022.2144955","url":null,"abstract":"ABSTRACT The applications of scatterometer data (σ°) are limited due to their coarser resolution (25–50 km). Some image reconstruction techniques are available to generate high-resolution products, but they require various sensor parameters and multiset observation, making them complex to use. Therefore, this paper proposes an information fusion approach to disaggregate the coarse resolution σ° product. The coarse resolution backscattering signal includes the contribution from more than one land cover class, such as short vegetation, soil, urban and tall vegetation, the information of which can be obtained from normalised difference vegetation index (NDVI), vegetation temperature condition index (VTCI), and fraction cover of urban and forests, respectively. Disaggregating this coarse resolution pixel, an optimum weight information is required that provides the distribution of each class. Since the distribution of land cover classes is not homogeneous for every pixel, a variance-based fusion approach has been used to obtain the optimum weight factors to fuse NDVI, VTCI, and fraction cover. These weight factors are used to disaggregate every coarse-resolution pixel into high-resolution pixels. 
The developed model is applied to Sentinel-1 and Scatsat-1 level-3 products, and the obtained results are quite satisfactory.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48503442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
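One plausible reading of the variance-based weighting is that each co-registered information layer is weighted in proportion to its variance (more spatial detail, more weight), normalised to sum to one. The exact formula in the paper may differ, so this sketch is an assumption:

```python
import numpy as np

def variance_weights(layers):
    """Weight each information layer (e.g. NDVI, VTCI, fraction cover)
    by its variance, normalised so the weights sum to 1."""
    variances = np.array([np.var(layer) for layer in layers], dtype=float)
    total = variances.sum()
    if total == 0.0:  # all layers flat: fall back to equal weights
        return np.full(len(layers), 1.0 / len(layers))
    return variances / total

def fuse(layers, weights):
    """Weighted sum of co-registered layers into one fused weight map."""
    stack = np.stack([np.asarray(l, float) for l in layers])
    return np.tensordot(weights, stack, axes=1)
```

The fused map would then distribute each coarse pixel's σ° among its high-resolution sub-pixels.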
{"title":"Semi-automatic road extraction from high resolution satellite images by template matching using Kullback–Leibler divergence as a similarity measure","authors":"Xiangguo Lin, W. Xie, Libo Zhang, H. Sang, Jing Shen, S. Cui","doi":"10.1080/19479832.2022.2121767","DOIUrl":"https://doi.org/10.1080/19479832.2022.2121767","url":null,"abstract":"ABSTRACT Semi-automatic extraction of roads is greatly needed to accelerate the acquisition and updating of road maps. However, road surfaces are frequently disturbed on very high spatial resolution (VHSR) remotely sensed satellite imagery, which bothers the road trackers using least-squares-based template matching. This paper presents a novel semi-automatic framework for road tracking from VHSR satellite imagery. First, a human operator inputs three seed points. Second, the computer automatically tracks the road by the template matching using Kullback–Leibler divergence as a similarity measure. At the same time, a human operator is retained in the tracking process to supervise the extracted results, to response to the program’s prompts. Once the failure or error happens, the human operator will correct the results and restart the automatic tracking. The above procedure is repeated until a whole road is tracked. Four satellite images with different complexities are used to perform experiments. 
The results show that the proposed road tracker is capable of extracting long, high-level roads from VHSR satellite images automatically, accurately, and quickly.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42488127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
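The Kullback–Leibler similarity measure used by the tracker can be illustrated on grey-level histograms of a template patch and a candidate patch; a lower divergence means a better match. The bin count and value range below are illustrative:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(p || q) between two normalised histograms."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def match_score(template, candidate, bins=32, value_range=(0, 255)):
    """Dissimilarity of a candidate road patch to the template, as the
    KL divergence between their grey-level histograms."""
    hp, _ = np.histogram(template, bins=bins, range=value_range)
    hq, _ = np.histogram(candidate, bins=bins, range=value_range)
    return kl_divergence(hp, hq)
```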
{"title":"A High Dynamic Range Image Fusion Method Based on Dual Gain Image","authors":"Li Yuan, Wenbo Wu, Shuli Dong, Q. He, Feiran Zhang","doi":"10.1080/19479832.2022.2116492","DOIUrl":"https://doi.org/10.1080/19479832.2022.2116492","url":null,"abstract":"ABSTRACT For a camera with automatic gain control, two images with high and low optical gain can be output at the same exposure time. Due to the small gain value, most of target details are hidden in the dark pixels for the low gain image, and the brightness saturation usually appears in high gain image for the high luminance areas. To obtain the essential information from the dual gain images, a generation method of high dynamic range image based on dual gain image was developed. The method is composed of five parts, including enhancement of image detail, establishment of Laplacian pyramid, selection of fusion operator, reconstruction of fusion pyramid and adjustment of image contrast. Results showed that combination of the gradient operator for N-1 layer and the neighbourhood filter operator for the Nth layer had better fusion effect. 
Moreover, based on the analysis of image information entropy and clarity, the fusion efficiency was calculated: the fusion efficiencies of Mertens’s method, Jiang’s method, Zhang’s method, Goshtasby’s method, and the presented method were 30.5%, 33.5%, 39.5%, 51% and 99%, respectively, indicating that the HDR fusion method based on dual-gain images is reliable.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43077309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
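The Laplacian pyramid machinery underlying the fusion can be sketched as follows. This shows only the decomposition and its exact inverse; the paper's gradient and neighbourhood-filter fusion operators, which would select or combine coefficients from the two gain images at each level, are not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def build_laplacian_pyramid(image, levels=3):
    """Laplacian pyramid: each level stores the detail lost by one
    blur-and-downsample step; the final entry is the low-pass residual."""
    pyramid, current = [], np.asarray(image, dtype=float)
    for _ in range(levels):
        blurred = gaussian_filter(current, sigma=1.0)
        down = blurred[::2, ::2]
        up = zoom(down, 2, order=1)[:current.shape[0], :current.shape[1]]
        pyramid.append(current - up)   # detail (Laplacian) layer
        current = down
    pyramid.append(current)            # low-pass residual
    return pyramid

def reconstruct(pyramid):
    """Invert the pyramid by upsampling and adding the details back."""
    current = pyramid[-1]
    for detail in reversed(pyramid[:-1]):
        up = zoom(current, 2, order=1)[:detail.shape[0], :detail.shape[1]]
        current = up + detail
    return current
```

Because each detail layer is stored against the same upsampling used in reconstruction, the round trip is exact; a fusion method replaces the detail layers of one pyramid with per-level combinations of the two source pyramids before reconstructing.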
{"title":"Unsupervised self-training method based on deep learning for soil moisture estimation using synergy of sentinel-1 and sentinel-2 images","authors":"A. Ben Abbes, N. Jarray","doi":"10.1080/19479832.2022.2106317","DOIUrl":"https://doi.org/10.1080/19479832.2022.2106317","url":null,"abstract":"ABSTRACT Here, we present a novel unsupervised self-training method (USTM) for SM estimation. First, a ML model is trained using the labeled and unlabeled data. Then, the pseudo-labeled data are generated employing the second model by adding a proxy labeled data. Eventually, SM is estimated applying the third model by pseudo-labeled data generated by the second model and unlabeled data. The final SM estimation result is obtained by training the third model. Subsequently, in-situ measurements are performed to validate our method. The final model is an unsupervised learning model. Experiments were carried out at two different sites located in southern Tunisia using Sentinel-1A and Sentinel-2A data. The input data include the backscatter coefficient in two-mode polarization ( and ), derived from Sentinel-1A, as well as the Normalized Difference Vegetation Index (NDVI) and the Normalized Difference Infrared Index (NDII) for Sentinel-2A and in-situ data. 
The USTM based on the (Random Forest (RF)-Convolutional Neural Network (CNN)-CNN) combination achieved the best performance and precision, compared to the (Artificial Neural Network (ANN)-CNN-CNN) and (eXtreme Gradient Boosting (XGBoost)-CNN-CNN) combinations.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-07-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48358633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
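The two Sentinel-2 indices used as inputs are normalised band differences. NDVI is conventionally computed from the NIR and red bands (B8, B4) and NDII from the NIR and SWIR bands (B8, B11); those band choices are the conventional ones and are not stated in the abstract:

```python
import numpy as np

def normalised_difference(band_a, band_b, eps=1e-9):
    """Generic normalised difference index (a - b) / (a + b)."""
    a = np.asarray(band_a, float)
    b = np.asarray(band_b, float)
    return (a - b) / (a + b + eps)

# NDVI = (NIR - Red)  / (NIR + Red)   -> Sentinel-2 bands B8, B4
# NDII = (NIR - SWIR) / (NIR + SWIR)  -> Sentinel-2 bands B8, B11
def ndvi(nir, red):
    return normalised_difference(nir, red)

def ndii(nir, swir):
    return normalised_difference(nir, swir)
```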
{"title":"Evaluation of focal loss based deep neural networks for traffic sign detection","authors":"Deepika Kamboj, Sharda Vashisth, Sumeet Saurav","doi":"10.1080/19479832.2022.2086304","DOIUrl":"https://doi.org/10.1080/19479832.2022.2086304","url":null,"abstract":"ABSTRACT With advancements in autonomous driving, demand for stringent and computationally efficient traffic sign detection systems has increased. However, bringing such a system to a deployable level requires handling critical accuracy and processing speed issues. A focal loss-based single-stage object detector, i.e RetinaNet, is used as a trade-off between accuracy and processing speed as it handles the class imbalance problem of the single-stage detector and is thus suitable for traffic sign detection (TSD). We assessed the detector’s performance by combining various feature extractors such as ResNet-50, ResNet-101, and ResNet-152 on three publicly available TSD benchmark datasets. Performance comparison of the detector using different backbone includes evaluation parameters like mean average precision (mAP), memory allocation, running time, and floating-point operations. From the evaluation results, we found that the RetinaNet object detector using the ResNet-152 backbone obtains the best mAP, while that using ResNet-101 strikes the best trade-off between accuracy and execution time. The motivation behind benchmarking the detector on different datasets is to analyse the detector’s performance on different TSD benchmark datasets. 
Among the three feature extractors, the RetinaNet model trained with the ResNet-50 backbone consumes the least memory, making it an optimal choice for deployment on low-cost embedded devices.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44159531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
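The focal loss that RetinaNet optimises down-weights well-classified examples so that the rare foreground classes dominate training. A minimal binary form, with the commonly used defaults alpha = 0.25 and gamma = 2:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-9):
    """Binary focal loss FL = -alpha_t * (1 - p_t)^gamma * log(p_t),
    averaged over examples. p holds predicted foreground probabilities,
    y holds 0/1 labels. Setting gamma = 0 recovers alpha-weighted
    cross-entropy."""
    p = np.asarray(p, float)
    y = np.asarray(y, float)
    p_t = np.where(y == 1, p, 1.0 - p)          # prob. of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-alpha_t * (1.0 - p_t) ** gamma * np.log(p_t + eps)))
```

The (1 - p_t)^gamma factor is what shrinks the contribution of easy examples: a confident correct prediction (p_t near 1) contributes almost nothing, leaving the gradient dominated by hard examples.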
{"title":"Surface drainage features identification using LiDAR DEM smoothing in agriculture area: a study case of Kebumen Regency, Indonesia","authors":"H. H. Handayani, Arizal Bawasir, A. Cahyono, T. Hariyanto, H. Hidayat","doi":"10.1080/19479832.2022.2076160","DOIUrl":"https://doi.org/10.1080/19479832.2022.2076160","url":null,"abstract":"ABSTRACT Digital Elevation Model (DEM) is the most vital data to generate drainage networks and to provide critical terrain factors and hydrologic derivatives, such as slope, aspect, and streamflow. The accuracy of generated drainage features is extensively dependent on the quality and resolution of DEM, such as LiDAR-derived DEM. Contrary, it has a high level of roughness and complexity. Thus, smoothing methods are sometimes employed to conquer the roughness. This paper presents feature-preserving DEM smoothing (FPDEM-S) and edge-preserving DEM smoothing (EPDEM-S) approaches to smooth surface complexity in kind of preserving small drainage features using the 0.5 m – resolution LiDAR DEM of the Kedungbener River area in Kebumen Regency, Indonesia. Entangling linear morphometric factors, those smoothing approaches delivered a slight difference of stream number, with the FPDEM-S stream length ratio performing 7% better tendencies. The FPDEM-S method perormed better than EPDEM-S in this study area to provide an optimal smoothed LiDAR DEM at certain parameter values. Summarising that two smoothing methods approaches performed similar characteristics of watershed as an oval structure close to the circular shape. 
The results also reveal that the watershed has not yet reached the maturity phase.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":null,"pages":null},"PeriodicalIF":2.3,"publicationDate":"2022-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46974091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
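The "oval structure close to circular" characterisation of a watershed is typically quantified with Miller's circularity ratio; assuming that is the metric intended, it is computed as:

```python
import math

def circularity_ratio(area, perimeter):
    """Miller's circularity ratio Rc = 4*pi*A / P^2: equals 1.0 for a
    perfect circle and decreases as the basin becomes more elongated.
    area and perimeter must be in consistent units (e.g. km^2 and km)."""
    return 4.0 * math.pi * area / (perimeter ** 2)
```

For example, a circular basin gives Rc = 1, while a square basin gives Rc = pi/4, so values near 1 support the "close to circular" description above.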