International Journal of Image and Data Fusion: Latest Articles

A fusion method for infrared and visible images based on iterative guided filtering and two channel adaptive pulse coupled neural network
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-08-31 DOI: 10.1080/19479832.2020.1814877
Qiufeng Fan, F. Hou, Feng Shi
Abstract: To make full use of the important features of the source images, a fusion method for infrared and visible images based on iterative guided filtering and a two-channel adaptive pulse coupled neural network (PCNN) is proposed. The input image is decomposed into a base layer, a small-scale layer and a large-scale layer by an iterative guided filter. The base layers are fused by combining pixel energy and gradient energy; the large-scale and small-scale layers are fused via the two-channel adaptive PCNN. The fused image is obtained by applying the inverse multi-scale decomposition. Experimental results show that, compared with other multi-scale decomposition methods, the proposed method better separates spatially overlapping features, preserves more detailed information in the fused image, and effectively suppresses artefacts.
(International Journal of Image and Data Fusion, vol. 12, no. 1, pp. 23-47)
Citations: 1
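The layered-fusion pipeline described in the abstract can be illustrated with a minimal sketch. Note the assumptions: a plain box filter stands in for the iterative guided filter, and a max-absolute rule replaces the two-channel adaptive PCNN stage, which the abstract does not specify in enough detail to reproduce here.

```python
import numpy as np

def box_filter(img, r):
    """Simple mean filter of radius r (a stand-in for a guided filter)."""
    k = 2 * r + 1
    pad = np.pad(img, r, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def decompose(img, r=2, iters=3):
    """Three-layer decomposition: base, small-scale and large-scale detail."""
    smooth1 = box_filter(img, r)
    base = smooth1
    for _ in range(iters - 1):          # iterate the filter for the base layer
        base = box_filter(base, r)
    small = img - smooth1               # fine detail removed by the first pass
    large = smooth1 - base              # coarser structure between the two scales
    return base, small, large

def fuse(a, b):
    ba, sa, la = decompose(a)
    bb, sb, lb = decompose(b)
    # base layers: keep the pixel with higher energy (squared intensity)
    base = np.where(ba ** 2 >= bb ** 2, ba, bb)
    # detail layers: max-absolute rule replaces the adaptive PCNN stage
    small = np.where(np.abs(sa) >= np.abs(sb), sa, sb)
    large = np.where(np.abs(la) >= np.abs(lb), la, lb)
    return base + small + large         # inverse of the additive decomposition
```

Because the decomposition is additive, fusing an image with itself reconstructs it exactly, which is a quick sanity check on the inverse step.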
A multi-focus image fusion method based on watershed segmentation and IHS image fusion
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-07-29 DOI: 10.1080/19479832.2020.1791262
Shaheera Rashwan, A.E. Youssef, B. A. Youssef
Abstract: High-magnification optical systems, such as microscopes or macro-photography lenses, cannot capture an object that is entirely in focus. Image acquisition therefore captures the object or scene in a set of images with different focus settings, which are then fused to produce an 'all-in-focus' image that is sharp everywhere. This process is called multi-focus image fusion. This paper proposes a method named Watershed on Intensity Hue Saturation (WIHS) to fuse multi-focus images. First, the defocused images are fused using IHS image fusion. Then the marker-controlled watershed segmentation algorithm is used to segment the fused image. Finally, the Sum-Modified Laplacian is applied to measure the focus of each region across the multi-focus images, and the region with the higher focus measure is chosen from its corresponding image to compose the all-in-focus result. Experimental results show that WIHS performs best in quantitative comparison with other methods.
(International Journal of Image and Data Fusion, vol. 12, no. 1, pp. 176-184)
Citations: 3
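The Sum-Modified Laplacian (SML) focus measure named in the abstract is a standard operator and easy to sketch. The per-region selection below is a simplified stand-in for the full WIHS pipeline (no IHS fusion or watershed segmentation is performed):

```python
import numpy as np

def sum_modified_laplacian(img, step=1):
    """SML focus measure: sum over the region of
    |2I(x,y) - I(x-s,y) - I(x+s,y)| + |2I(x,y) - I(x,y-s) - I(x,y+s)|."""
    i = img.astype(float)
    ml = np.zeros_like(i)
    # vertical second difference (interior rows only)
    ml[step:-step, :] = np.abs(2 * i[step:-step, :] - i[:-2 * step, :] - i[2 * step:, :])
    # horizontal second difference (interior columns only)
    ml[:, step:-step] += np.abs(2 * i[:, step:-step] - i[:, :-2 * step] - i[:, 2 * step:])
    return ml.sum()

def pick_focused(region_a, region_b):
    """Choose the better-focused of two corresponding regions,
    as WIHS does for each watershed region."""
    if sum_modified_laplacian(region_a) >= sum_modified_laplacian(region_b):
        return region_a
    return region_b
```

A sharp, textured region scores higher than a smoothed copy of itself, which is what lets the rule select the in-focus source per region.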
Practical applications on sustainable development goals in IJIDF
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-07-02 DOI: 10.1080/19479832.2020.1755083
Jixian Zhang
Abstract (excerpt): Since the adoption of the Sustainable Development Goals (SDGs) by the United Nations General Assembly in 2015, the 2030 Agenda has provided a blueprint for shared prosperity in a sustainable world where a…
(International Journal of Image and Data Fusion, vol. 11, no. 1, pp. 215-217)
Citations: 0
Accuracy analysis of Bluetooth-Low-Energy ranging and positioning in NLOS environment
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-05-25 DOI: 10.1080/19479832.2020.1752314
Deng Yang, Jian Wang, Minmin Wang, Houzeng Han, Yalei Zhang
Abstract: To address the low accuracy of Bluetooth Low Energy (BLE) ranging and positioning in non-line-of-sight (NLOS) environments, this paper proposes an NLOS BLE ranging method based on an NLOS BLE ranging model, together with an NLOS BLE positioning algorithm based on the TLS method and the triangulation algorithm. First, a line-of-sight (LOS) BLE ranging model is established from RSSI values and the actual distance between the BLE beacon and the terminal device. Second, an NLOS BLE ranging model is established from the LOS model and the threshold-corrected RSSI peak. Third, the NLOS RSSI value is processed by the NLOS ranging model to obtain the optimal ranging value. Finally, high-precision positioning coordinates are obtained using the NLOS positioning algorithm and the optimal ranging values. Two experiments show that, within a range of 7 m, the average ranging accuracy of the improved NLOS method is 0.37 m, a 49.19% improvement over the traditional method, and that the average positioning accuracy of the proposed algorithm is 0.4 m. In NLOS environments, the proposed ranging method and positioning algorithm can therefore significantly improve the accuracy of BLE ranging and positioning.
(International Journal of Image and Data Fusion, vol. 11, no. 1, pp. 356-374)
Citations: 4
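The abstract does not give the functional form of the ranging model. A common choice for BLE RSSI ranging is the log-distance path-loss model; the sketch below is a hypothetical illustration of the LOS model-building (fitting) and inversion (ranging) steps, not the paper's NLOS correction.

```python
import math

def rssi_to_distance(rssi, rssi_at_1m=-59.0, n=2.0):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(1 m) - 10 * n * log10(d) to get distance in metres."""
    return 10 ** ((rssi_at_1m - rssi) / (10.0 * n))

def fit_path_loss(samples):
    """Least-squares fit of (rssi_at_1m, n) from (distance, rssi) pairs,
    as the LOS model-building step might do. Rewriting the model as
    y = a + n*x with x = -10*log10(d) makes it ordinary linear regression."""
    xs = [-10.0 * math.log10(d) for d, _ in samples]
    ys = [r for _, r in samples]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = my - n * mx                     # intercept = RSSI at 1 m
    return a, n
```

Fitting on samples generated from the model recovers the parameters exactly, and inverting the RSSI at the 1 m reference returns 1 m.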
Enabling real-time and high accuracy tracking with COTS RFID devices
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-04-21 DOI: 10.1080/19479832.2020.1752315
K. Zhao, Binghao Li
Abstract: RFID technology has been widely used for object tracking in industry, and very high accuracy (centimetre-level) positioning based on RFID carrier-phase measurement has been reported. However, most of the proposed fine-grained tracking methods only work under very strict preconditions: some require either the reader or the tag to move along a fixed one-dimensional track at constant speed, while others need pre-deployed tags at known locations as reference points. This paper proposes a new approach that can track RFID tags without knowing the speed, the track, or the initial location of the tag, requiring only the antenna coordinates. Experimental results show that the algorithm converges within 10 seconds and that the average positioning accuracy reaches centimetre or even sub-centimetre level under different preconditions. The algorithm has also been optimised with a prediction-update procedure to meet the requirements of both post-processing and real-time tracking.
(International Journal of Image and Data Fusion, vol. 11, no. 1, pp. 251-267)
Citations: 3
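Carrier-phase RFID ranging is accurate but ambiguous, which is why the strict preconditions mentioned above are normally needed. The sketch below shows the basic backscatter phase model and the resulting distance ambiguity; it is background illustration only, not the paper's tracking algorithm.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_of_distance(d, freq_hz=920e6):
    """RFID backscatter phase: the signal travels reader -> tag -> reader,
    so the round trip is 2*d and phase = (4*pi*d*freq/c) mod 2*pi."""
    return (4 * math.pi * d * freq_hz / C) % (2 * math.pi)

def distance_candidates(phase, freq_hz=920e6, max_range=5.0):
    """Phase repeats every half wavelength of distance, so one measurement
    yields a whole family of candidate distances within max_range."""
    half_wl = C / (2 * freq_hz)                 # ambiguity period in metres
    base = phase * C / (4 * math.pi * freq_hz)  # smallest consistent distance
    out = []
    k = 0
    while base + k * half_wl <= max_range:
        out.append(base + k * half_wl)
        k += 1
    return out
```

At 920 MHz the ambiguity period is roughly 16 cm, so a single phase reading maps to dozens of candidate distances within a few metres; resolving that ambiguity is what tracking algorithms (or the strict preconditions) must do.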
Machine learning on high performance computing for urban greenspace change detection: satellite image data fusion approach
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-04-10 DOI: 10.1080/19479832.2020.1749142
Nilkamal More, V. Nikam, Biplab Banerjee
Abstract: Green spaces serve important environmental and quality-of-life functions in urban environments, and fast-changing urban regions require continuous, rapid green-space change detection. This study assesses green-space change detection using GPU-based computing for time-efficient green-space identification and monitoring. Using spatio-temporal satellite imagery and a support vector machine (SVM) as the classification algorithm, it proposes a platform for green-space analysis and change detection. The main contributions include fusing the thermal band, in addition to the near-infrared, red and green bands, and combining the high spectral information of the moderate resolution imaging spectroradiometer (MODIS) dataset with the high spatial information of the LANDSAT 7 dataset. The method is used to calculate the total green-space area in the Mumbai metropolitan area and to monitor changes from 2005 to 2019. The findings reveal that over the course of 15 years the overall green space was reduced to 50%.
(International Journal of Image and Data Fusion, vol. 11, no. 1, pp. 218-232)
Citations: 11
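The paper classifies fused bands with an SVM; as a much simpler stand-in, the sketch below derives green-space masks from NDVI thresholding on the near-infrared and red bands and reports the net change in area. The 0.3 threshold is an assumption, not a value from the paper.

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index from NIR and red reflectance."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + 1e-9)  # epsilon avoids division by zero

def greenspace_change(nir_t0, red_t0, nir_t1, red_t1, threshold=0.3):
    """Per-pixel green-space masks at two dates and the net change
    in green area, measured in pixels."""
    g0 = ndvi(nir_t0, red_t0) > threshold
    g1 = ndvi(nir_t1, red_t1) > threshold
    return g0, g1, int(g1.sum()) - int(g0.sum())
```

On a synthetic scene where half the vegetated pixels are cleared between the two dates, the change figure comes out negative by exactly the cleared pixel count.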
A multi-sensor-based evaluation of the morphometric characteristics of Opa river basin in Southwest Nigeria
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-04-02 DOI: 10.1080/19479832.2019.1683622
A. O. Adewole, Felix Ike, A. Eludoyin
Abstract: Studies have shown that many river basins in sub-Saharan Africa are largely unmonitored, partly because they are poorly gauged or totally ungauged. In this study, remote sensing products that are freely available in the region (Landsat; the Advanced Spaceborne Thermal Emission and Reflection Radiometer, ASTER; and the Shuttle Radar Topography Mission, SRTM) were harnessed for monitoring the Opa river basin in southwestern Nigeria. These products were used together with topographical sheets (1:50,000), ground-based observation and global positioning systems to determine selected morphometric characteristics, as well as changes in land use/land cover and their impact on peak runoff in the basin. Results showed that the basin is a 5th-order basin whose land area has been subjected to different natural and anthropogenic influences within the study period. Urbanisation is a major factor threatening the basin with degradation, and the observed threats are expected to worsen if restoration of some tributaries is not considered. The study concluded that complementary use of the remote sensing products available in the region will provide important decision-support information for the management and monitoring of river basins.
(International Journal of Image and Data Fusion, vol. 11, no. 1, pp. 185-200)
Citations: 5
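The '5th-order basin' result refers to Strahler stream ordering, which can be computed recursively over a drainage network: a headwater channel has order 1, and a channel's order increases by one only where two tributaries of equal order meet. The dictionary-of-children tree below is a hypothetical simplification of real channel-network data.

```python
def strahler_order(tree, node):
    """Strahler order of `node` in a drainage network given as
    {node: [upstream tributaries]}; leaves (headwaters) have order 1."""
    kids = tree.get(node, [])
    if not kids:
        return 1
    orders = sorted((strahler_order(tree, k) for k in kids), reverse=True)
    # order rises only when the two highest tributary orders are equal
    if len(orders) >= 2 and orders[0] == orders[1]:
        return orders[0] + 1
    return orders[0]
```

For example, two order-1 headwaters joining give an order-2 stream, and two order-2 streams joining at the outlet give an order-3 basin.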
Video-based salient object detection using hybrid optimisation strategy and contourlet mapping
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-04-02 DOI: 10.1080/19479832.2019.1683625
S. A., H. N. Suresh
Abstract: Advances in salient object detection have attracted many researchers, and the problem is significant in several computer vision applications; efficient salient object detection, however, remains a major challenge. This paper proposes a salient object detection technique using the proposed Spider-Gray Wolf Optimiser (S-GWO) algorithm, designed by combining the Gray Wolf Optimiser (GWO) and Spider Monkey Optimisation (SMO). The technique involves keyframe extraction, saliency mapping, contourlet mapping, and fusion of the resulting maps using optimal coefficients. The extracted frames are first subjected to saliency mapping and contourlet mapping simultaneously to determine the quality of each pixel; the two outputs are then fused using coefficients selected by the proposed S-GWO to obtain the final result used for detecting the salient objects. Experimental evaluation shows that S-GWO attains maximal accuracy, sensitivity and specificity of 0.914, 0.861 and 0.929, respectively.
(International Journal of Image and Data Fusion, vol. 11, no. 1, pp. 162-184)
Citations: 1
Improved auto-extrinsic calibration between stereo vision camera and laser range finder
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-02-28 DOI: 10.1080/19479832.2020.1727574
Archana Khurana, K. S. Nagla
Abstract: This study presents a way to accurately estimate the extrinsic calibration parameters between a stereo vision camera and a 2D laser range finder (LRF), based on 3D reconstruction of a monochromatic calibration board and geometric co-planarity constraints between the views of the two sensors. It automatically extracts plane-line correspondences between the camera and the LRF using the monochromatic board, further improved by selecting optimal threshold values for laser-scan dissection when extracting line features from the LRF data. The calibration parameters are obtained by solving the co-planarity constraints between the estimated plane and line, and then refined by minimising the reprojection error and the error from the co-planarity constraints. Calibration accuracy benefits from the reliable plane-line correspondences extracted with the monochromatic board, which reduces the impact of the range-reflectivity bias observed in LRF data on a checkerboard. Because feature correspondences are extracted automatically, the method greatly reduces operator time compared with manual methods. The performance is validated by extensive experimentation and simulation, and the estimated parameters demonstrate better accuracy than conventional methods.
(International Journal of Image and Data Fusion, vol. 12, no. 1, pp. 122-154)
Citations: 3
An optimised multi-scale fusion method for airport detection in large-scale optical remote sensing images
IF 2.3
International Journal of Image and Data Fusion Pub Date : 2020-02-20 DOI: 10.1080/19479832.2020.1727573
Shoulin Yin, Hang Li, Lin Teng, Man Jiang, Shahid Karim
Abstract: Airport detection in remote sensing images is an important task in both military and civil domains. Conventional algorithms have mostly been applied to small-scale remote sensing images and are less capable of finding objects in large-scale, high-resolution imagery; their computational complexity is high, making them unsuitable for rapid localisation with high detection accuracy. To solve these problems, we propose an optimised multi-scale fusion method for airport detection in large-scale optical remote sensing images. First, discrete wavelet multi-scale decomposition is applied to the remote sensing image and multiple object features are extracted in each sub-band. Second, a fusion rule based on optimised region selection fuses the features at each scale, with singular value decomposition (SVD) used to fuse the low-frequency components and principal component analysis (PCA) used to fuse the high-frequency components. Third, the final fused image is obtained by weighted fusion. Finally, the selective search method is employed to detect the airport in the fused image. Experimental results show that the detection accuracy is better than other state-of-the-art methods.
(International Journal of Image and Data Fusion, vol. 11, no. 1, pp. 201-214)
Citations: 30
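The paper fuses wavelet sub-bands with SVD and PCA rules after optimised region selection. As a simplified illustration of the same sub-band structure, the sketch below substitutes a one-level Haar decomposition with a weighted rule for the low-frequency sub-band and a max-absolute rule for the high-frequency sub-bands; the SVD/PCA rules and region selection are not reproduced.

```python
import numpy as np

def haar2(img):
    """One-level 2D Haar decomposition into (LL, LH, HL, HH) sub-bands."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4        # low-frequency approximation
    lh = (a + b - c - d) / 4        # horizontal detail
    hl = (a - b + c - d) / 4        # vertical detail
    hh = (a - b - c + d) / 4        # diagonal detail
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    out = np.zeros((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def fuse(img1, img2, w=0.5):
    """Weighted low-frequency fusion plus max-absolute detail fusion."""
    s1, s2 = haar2(img1), haar2(img2)
    ll = w * s1[0] + (1 - w) * s2[0]
    details = [np.where(np.abs(x) >= np.abs(y), x, y)
               for x, y in zip(s1[1:], s2[1:])]
    return ihaar2(ll, *details)
```

Because the transform is exactly invertible, round-tripping an image through `haar2`/`ihaar2` reproduces it, and fusing an image with itself returns the image unchanged.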