{"title":"Creating a virtual reality environment with a fusion of sUAS and TLS point-clouds","authors":"D. Bolkas, Jeffrey Chiampi, J. Chapman, Vincent F. Pavill","doi":"10.1080/19479832.2020.1716861","DOIUrl":"https://doi.org/10.1080/19479832.2020.1716861","url":null,"abstract":"ABSTRACT In recent years, immersive virtual reality has been used in disciplines such as engineering, sciences, and education. Point-cloud technologies such as laser scanning and unmanned aerial systems have become important for creating virtual environments. This paper discusses creating virtual environments from 3D point-cloud data suitable for immersive and interactive virtual reality. Both laser scanning and sUAS point-clouds are utilised. These point-clouds are merged using a custom-made algorithm that identifies data gaps in the master dataset (laser scanner) and fills them with data from a slave dataset (sUAS) resulting in a more complete dataset that is used for terrain modelling and 3D modelling of objects. The terrain and 3D objects are then textured with custom-made and free textures to provide a sense of realism in the objects. The created virtual environment is a digital copy of a part of the Penn State Wilkes-Barre campus. This virtual environment will be used in immersive and interactive surveying laboratories to assess the role of immersive virtual reality in surveying engineering education.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"136 - 161"},"PeriodicalIF":2.3,"publicationDate":"2020-01-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1716861","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43747254","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accurate playground localisation based on multi-feature extraction and cascade classifier in optical remote sensing images","authors":"Xiaowei Wang, Shoulin Yin, Desheng Liu, Hang Li, Shahid Karim","doi":"10.1080/19479832.2020.1716862","DOIUrl":"https://doi.org/10.1080/19479832.2020.1716862","url":null,"abstract":"ABSTRACT To address the low accuracy problem of playground detection under complex background, the accurate playground localization based on multi-feature extraction and cascade classifier is proposed in this paper. It is difficult to utilize this information to separate objects from the complex background. Therefore, we adopt multi-feature extraction method to make the playgrounds more easily to be detected. The proposed localization method is partitioned into two modules: feature extraction and classification. First, multi feature extraction method combining histogram of oriented gradients (HOG) and Haar is utilized to extract features from raw images. HOG can authentically capture the shape information, which is extracted to characterize the local region. Haar can improve the image eigenvalue calculation effectively. Afterwards, cascade classifier based on AdaBoost algorithm is adopted to classify the extracted features. Finally we conduct the experiments with our proposed methodology on a publicly accessible remote sensing images from Google Earth. The results demonstrate that the proposed framework has a better effect with achieving high levels of recall, precision and F-score compared to the state-of-the-art alternatives, without sacrificing computational soundness. What is more, the results indicate that the proposed playground 1ocalisation method has strong robustness under different complex backgrounds with high detection rate.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"233 - 250"},"PeriodicalIF":2.3,"publicationDate":"2020-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1716862","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47839197","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An overview of deep learning methods for image registration with focus on feature-based approaches","authors":"Kavitha Kuppala, Sandhya Banda, Thirumala Rao Barige","doi":"10.1080/19479832.2019.1707720","DOIUrl":"https://doi.org/10.1080/19479832.2019.1707720","url":null,"abstract":"ABSTRACT Image registration is an essential pre-processing step for several computer vision problems like image reconstruction and image fusion. In this paper, we present a review on image registration approaches using deep learning. The focus of the survey presented is on how conventional image registration methods such as area-based and feature-based methods are addressed using deep net architectures. Registration approach adopted depends on type of images and type of transformation used to describe the deformation between the images in an application. We then present a comparative performance analysis of convolutional neural networks that have shown good performance across feature extraction, matching and transformation estimation in featured-based registration. Experimentation is done on each of these approaches using a dataset of aerial images generated by inducing deformations such as scale.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"113 - 135"},"PeriodicalIF":2.3,"publicationDate":"2020-01-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2019.1707720","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46917445","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of SAR and PolSAR images using deep learning: a review","authors":"Hemani Parikh, Samir B. Patel, Vibha Patel","doi":"10.1080/19479832.2019.1655489","DOIUrl":"https://doi.org/10.1080/19479832.2019.1655489","url":null,"abstract":"ABSTRACT Advancement in remote sensing technology and microwave sensors explores the applications of remote sensing in different fields. Microwave remote sensing encompasses its benefits of providing cloud-free, all-weather images and images of day and night. Synthetic Aperture Radar (SAR) images own this capability which promoted the use of SAR and PolSAR images in land use/land cover classification and various other applications for different purposes. A review of different polarimetric decomposition techniques for classification of different regions is introduced in the paper. The general objective of the paper is to help researchers in identifying a deep learning technique appropriate for SAR or PolSAR image classification. The architecture of deep networks which ingest new ideas in the given area of research are also analysed in this paper. Benchmark datasets used in microwave remote sensing have been discussed and classification results of those data are analysed. Discussion on experimental results on one of the benchmark datasets is also provided in the paper. The paper discusses challenges, scope and opportunities in research of SAR/PolSAR images which will be helpful to researchers diving into this area.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"1 - 32"},"PeriodicalIF":2.3,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2019.1655489","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48728383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel region-based iterative seed method for the detection of multiple lanes","authors":"S. Shirke, R. Udayakumar","doi":"10.1080/19479832.2019.1683623","DOIUrl":"https://doi.org/10.1080/19479832.2019.1683623","url":null,"abstract":"ABSTRACT Most of the global automotive companies have been paid great efforts for reducing the accidents by developing an Advanced Driver Assistance System (ADAS) as well as autonomous vehicles. Lane detection is essential for both autonomous driving and ADAS because the vehicles must follow the lane. Detection of the lane is very challenging because of the varying road conditions. Lane detection has attracted the attention of the computer vision community for several decades. Essentially, lane detection is a multi-feature detection problem that has become a real challenge for computer vision and machine learning techniques. This paper presents a region-based segmentation based on iterative seed method for multi-lane detection. Here, the detection of multi-lanes is done after the segmentation, which is highly efficient and improves the computing speed. In the proposed region-based segmentation method, the segmentation of lanes from the roads is carried out by selecting the target grids, after partitioning the input image into grids. Then, based on the distance measure, the optimal segments are chosen by an iterative procedure. The performance of the proposed region-based iterative seed method is evaluated using detection accuracy, sensitivity, and specificity, where it has the maximum detection accuracy of 98.89%.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"57 - 76"},"PeriodicalIF":2.3,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2019.1683623","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49231307","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quality assessment of fusing Sentinel-2 and WorldView-4 imagery on Sentinel-2 spectral band values: a case study of Zagreb, Croatia","authors":"Luka Rumora, M. Gašparović, Mario Miler, D. Medak","doi":"10.1080/19479832.2019.1683624","DOIUrl":"https://doi.org/10.1080/19479832.2019.1683624","url":null,"abstract":"ABSTRACT Image fusion methods aim at fusing low resolution and high-resolution image to obtain a new image that provides new information for the specific application. The main goal of this article is multitemporal Sentinel-2 image fusion using single WorldView-4 satellite image for urban area monitoring. Fusing those images should provide Sentinel-2 image with similar radiometric band value as original Sentinel-2 image, but with a spatial resolution of WorldView-4. Ehlers, Brovey Transform, Modified Intensity-Hue-Saturation, High-Pass Filtering, Hyperspherical Colour Space and Wavelet resolution merge fusion techniques were used for spatial enhancement of Sentinel-2 images. Original and fused images were first compared using standard statistical parameters, mean, median and standard deviation. Image quality analysis was conducted with different objective image quality measures like root mean square error, peak signal to noise ratio, universal image quality index, structural similarity index, relative dimensionless global error, spatial correlation coefficient, relative average spectral error, spectral angle mapper, multi-scale structural similarity index. Using these quality measures helped in determining the spectral and spatial preservation of fused images. Hyperspherical colour space method was selected as the best method for image fusion of Sentinel-2 and WorldView-4 image-based on standard statistical parameters and quality measures.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"77 - 96"},"PeriodicalIF":2.3,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2019.1683624","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42674116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mapping crop types in fragmented arable landscapes using AVIRIS-NG imagery and limited field data","authors":"E. Salas, S. Subburayalu, B. Slater, K. Zhao, B. Bhattacharya, R. Tripathy, Ayan Das, R. Nigam, R. Dave, Parshva Parekh","doi":"10.1080/19479832.2019.1706646","DOIUrl":"https://doi.org/10.1080/19479832.2019.1706646","url":null,"abstract":"ABSTRACT The fragmented nature of arable landscapes and diverse cropping patterns often thwart the precise mapping of crop types. Recent advances in remote-sensing technologies and data mining approaches offer a viable solution to this mapping problem. We demonstrated the potential of using hyperspectral imaging and an ensemble classification approach that combines five machine-learning classifiers to map crop types in the Anand District of Gujarat, India. We derived a set of narrow/broad-band indices from the Airborne Visible Infrared Imaging Spectrometer-Next Generation (AVIRIS-NG) imagery to represent spectral variations and identify target classes and their distribution patterns. The results showed that Maximum Entropy (MaxEnt) and Generalised Linear Model (GLM) had strong discriminatory image classification abilities with Area Under the Curve (AUC) values ranging between 0.75 and 0.93 for MaxEnt and between 0.73 and 0.92 for GLM. The ensemble model resulted in improved accuracy scores compared to individual models. We found the Photochemical Reflectance Index (PRI) and Moment Distance Ratio Right/Left (MDRRL) to be important predictors for target classes such as wheat, legumes, and eggplant. Results from the study revealed the potential of using one-class ensemble modelling approach and hyperspectral images with limited field dataset to map agricultural systems that are fragmented in nature.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"33 - 56"},"PeriodicalIF":2.3,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2019.1706646","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45568167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Acknowledgement to Reviewers of the International Journal of Image and Data Fusion in 2019","authors":"Daniel Tutu Bigdeli, Behnaz Borba, Eduardo Zilles Chen, E. Chen, Jie Chen, Wei Chen, Weihai Chen, Timo Krehl, Gunther Kumar, Brijesh Lamb, Anupama B. Lang, Stefan Li, Leida Li, Wenping Li, Xiaodong Liu, Xiaolong Madeiro","doi":"10.1080/19479832.2020.1715649","DOIUrl":"https://doi.org/10.1080/19479832.2020.1715649","url":null,"abstract":"The editors of the International Journal of Image and Data Fusion wish to express their sincere gratitude to the following reviewers for their valued contribution to the journal in 2019. Abdelkareem, Mohamed Altuntas, C. Bama, B. Sathya Benefoh, Daniel Tutu Bigdeli, Behnaz Borba, Eduardo Zilles Chen, Erxue Chen, Jie Chen, Wei Chen, Weihai Chen, Yong Chiranjeevi, Karri Coelho, Leandro dos S. Dai, Qiqin De Alban, Jose Don T. Drissia, T. K. Fonseca, Leila Fryskowska, Anna Gabriel Avina Cervantes, Juan Gao, Feng Ghandehari, Mehran Gulersoy, A.E. Hiray, Yogita V. Hong, Haoyuan Hu, Xiangyun Hu, Zhaozheng Huang, Yongdong Hung, Kwok-Wai Ibarrola-Ulzurrun, Edurne Jenerowicz, Agnieszka Jiang, Junjun Jiang, Xinwei Jiao, Licheng Kainz, Wolfgang INTERNATIONAL JOURNAL OF IMAGE AND DATA FUSION 2020, VOL. 11, NO. 1, i–ii https://doi.org/10.1080/19479832.2020.1715649","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"11 1","pages":"i - ii"},"PeriodicalIF":2.3,"publicationDate":"2020-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2020.1715649","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41636363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Image Fusion","authors":"Gang Xiao, D. P. Bavirisetti, Gang Liu, Xingchen Zhang","doi":"10.1007/978-981-15-4867-3","DOIUrl":"https://doi.org/10.1007/978-981-15-4867-3","url":null,"abstract":"","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"34 1","pages":""},"PeriodicalIF":2.3,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90846246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DEM fusion concept based on the LS-SVM cokriging method","authors":"A. Setiyoko, A. M. Arymurthy, T. Basaruddin","doi":"10.1080/19479832.2019.1664647","DOIUrl":"https://doi.org/10.1080/19479832.2019.1664647","url":null,"abstract":"ABSTRACT Data fusion from two sources of data could develop better output since the process may minimise any inherent disadvantages of the data. Cokriging data fusion requires a semivariogram fitting process, which is an important step for weight determination in the fusion process. The traditional method of cokriging fusion usually applies a specific model of semivariogram fitting based on the available options, such as circular or tetraspherical. This research aims to fuse height point data from two different sources using ordinary kriging based on LS-SVM regression, which is applied to the semivariogram fitting process. The data used are height points generated from stereo satellite imagery, GPS measurement, and topographic map points to generate DEMs. The research experiment begins by calculating the semivariogram model for all the data, and then the fitting process is performed by applying the same approach of functional approach for both sets of data. The following process is an ordinary cokriging interpolation, whose results are analysed and compared to the ordinary kriging interpolation. The experiment results prove that the ordinary cokriging fusion process could reduce interpolation error. The LS-SVM approach offers better precision in the semivariogram modelling by determining more precise weight calculation for cokriging fusion process.","PeriodicalId":46012,"journal":{"name":"International Journal of Image and Data Fusion","volume":"10 1","pages":"244 - 262"},"PeriodicalIF":2.3,"publicationDate":"2019-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/19479832.2019.1664647","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48762058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}