Multi-scale Feature Extraction and Fusion Net: Research on UAVs Image Semantic Segmentation Technology
Xiaogang Li; Di Su; Dongxu Chang; Jiajia Liu; Liwei Wang; Zhansheng Tian; Shuxuan Wang; Wei Sun
Journal of ICT Standardization, vol. 11, no. 1, pp. 97-116, 2023. DOI: 10.13052/jicts2245-800X.1115
https://ieeexplore.ieee.org/document/10261466/
Citations: 1
Abstract
UAV aerial images are usually captured at high altitude with oblique viewing angles, so the data volume is large, the spatial resolution varies greatly, and information about small targets is easily lost during segmentation. To address these problems, this paper presents a semantic segmentation method for UAV images that introduces a multi-scale feature extraction and fusion module into an encoder-decoder framework. By combining multi-scale channel feature extraction with multi-scale spatial feature extraction, the network can focus on the most informative feature layers and spatial regions during feature extraction. Redundant, uninformative features are suppressed, and the segmentation results are refined by introducing global context information that captures both global and detailed information. The proposed method is compared with the FCN-8s, MSDNet, and U-Net network models on UAVid, a large-scale multi-class UAV dataset. The experimental results indicate that the proposed method achieves higher performance in both MIoU and MPA, with overall improvements of 9.2% and 8.5%, respectively, and that its predictions are more balanced across large-scale and small-scale targets.
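No code accompanies this abstract, so the PyTorch sketch below is only an illustration of how a multi-scale channel and spatial feature extraction and fusion block of the kind described might be structured. Every class name, dilation rate, and parameter here is an assumption for exposition, not the authors' implementation.

```python
# Illustrative sketch only: names, pooling scales, and dilation rates are
# assumptions about a multi-scale channel/spatial attention fusion block,
# not the implementation from the paper.
import torch
import torch.nn as nn


class MultiScaleChannelAttention(nn.Module):
    """Re-weights feature channels using descriptors pooled at several scales."""

    def __init__(self, channels, reduction=4, pool_sizes=(1, 2, 4)):
        super().__init__()
        self.pool_sizes = pool_sizes
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
        )

    def forward(self, x):
        # Pool the feature map at multiple scales, then reduce each descriptor
        # to a single per-channel weight and average them.
        descriptors = [nn.functional.adaptive_avg_pool2d(x, s) for s in self.pool_sizes]
        weights = [self.fc(d).mean(dim=(2, 3), keepdim=True) for d in descriptors]
        w = torch.sigmoid(torch.stack(weights, dim=0).mean(dim=0))
        return x * w


class MultiScaleSpatialAttention(nn.Module):
    """Highlights spatial regions using dilated convolutions at multiple rates."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, 1, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )

    def forward(self, x):
        # Average the per-branch saliency maps, then gate the input spatially.
        saliency = torch.stack([b(x) for b in self.branches], dim=0).mean(dim=0)
        return x * torch.sigmoid(saliency)


class FeatureFusionBlock(nn.Module):
    """Fuses encoder and decoder features after channel/spatial re-weighting."""

    def __init__(self, channels):
        super().__init__()
        self.channel_att = MultiScaleChannelAttention(channels)
        self.spatial_att = MultiScaleSpatialAttention(channels)
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, encoder_feat, decoder_feat):
        fused = torch.cat(
            [self.channel_att(encoder_feat), self.spatial_att(decoder_feat)], dim=1
        )
        return self.project(fused)


if __name__ == "__main__":
    block = FeatureFusionBlock(channels=64)
    enc = torch.randn(1, 64, 128, 128)
    dec = torch.randn(1, 64, 128, 128)
    print(block(enc, dec).shape)  # torch.Size([1, 64, 128, 128])
```

In an encoder-decoder segmentation network, a block of this kind would typically sit on each skip connection, re-weighting encoder features before they are merged with the upsampled decoder features, which is consistent with the abstract's goal of preserving small-target detail while suppressing redundant features.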