{"title":"MCTN-Net:结合方向和语义特征的多类别交通网络提取方法","authors":"Chenglin Shao;Huifang Li;Huanfeng Shen","doi":"10.1109/LGRS.2024.3372194","DOIUrl":null,"url":null,"abstract":"Transportation network extraction based on deep learning has become a hotspot. However, the existing models all aim to distinguish between background and transportation networks, while ignoring the class attributes within the transportation networks. In this letter, we propose a multiclass transportation network extraction network (MCTN-Net) to simultaneously extract railways, roadways, trails, and bridges. Inspired by multitask learning, the network first extracts the orientation and semantic information together by the use of a dense feature shared encoder (DFSE). The orientation and semantic features are then fused in the orientation-guided stacking module (OGSM) to enhance the connection between transportation network pixels. Furthermore, a semantic refinement branch (SRB) is designed to improve the ability to classify different transportation network types through deep supervised fusion and class attention. A multiclass transportation network dataset (MCTN dataset) was constructed and used in the experiments. The experiential results indicate that the proposed method achieves a mean intersection over union (MIoU) of 64.29% and a frequency-weighted intersection over union (FWIoU) of 71.20% without the background, which is significantly better than the other road extraction models and semantic segmentation methods. The code and dataset are available at \n<uri>https://github.com/fzzfRS/MCTN-Net</uri>\n.","PeriodicalId":91017,"journal":{"name":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","volume":"21 ","pages":"1-5"},"PeriodicalIF":4.4000,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"MCTN-Net: A Multiclass Transportation Network Extraction Method Combining Orientation and Semantic Features\",\"authors\":\"Chenglin Shao;Huifang Li;Huanfeng Shen\",\"doi\":\"10.1109/LGRS.2024.3372194\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Transportation network extraction based on deep learning has become a hotspot. However, the existing models all aim to distinguish between background and transportation networks, while ignoring the class attributes within the transportation networks. In this letter, we propose a multiclass transportation network extraction network (MCTN-Net) to simultaneously extract railways, roadways, trails, and bridges. Inspired by multitask learning, the network first extracts the orientation and semantic information together by the use of a dense feature shared encoder (DFSE). The orientation and semantic features are then fused in the orientation-guided stacking module (OGSM) to enhance the connection between transportation network pixels. Furthermore, a semantic refinement branch (SRB) is designed to improve the ability to classify different transportation network types through deep supervised fusion and class attention. A multiclass transportation network dataset (MCTN dataset) was constructed and used in the experiments. 
The experiential results indicate that the proposed method achieves a mean intersection over union (MIoU) of 64.29% and a frequency-weighted intersection over union (FWIoU) of 71.20% without the background, which is significantly better than the other road extraction models and semantic segmentation methods. The code and dataset are available at \\n<uri>https://github.com/fzzfRS/MCTN-Net</uri>\\n.\",\"PeriodicalId\":91017,\"journal\":{\"name\":\"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society\",\"volume\":\"21 \",\"pages\":\"1-5\"},\"PeriodicalIF\":4.4000,\"publicationDate\":\"2024-03-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/10456894/\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE geoscience and remote sensing letters : a publication of the IEEE Geoscience and Remote Sensing Society","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/10456894/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
MCTN-Net: A Multiclass Transportation Network Extraction Method Combining Orientation and Semantic Features
Transportation network extraction based on deep learning has become a research hotspot. However, existing models all aim to distinguish transportation networks from the background, while ignoring the class attributes within the transportation networks. In this letter, we propose a multiclass transportation network extraction network (MCTN-Net) to simultaneously extract railways, roadways, trails, and bridges. Inspired by multitask learning, the network first extracts orientation and semantic information together using a dense feature shared encoder (DFSE). The orientation and semantic features are then fused in the orientation-guided stacking module (OGSM) to enhance the connection between transportation network pixels. Furthermore, a semantic refinement branch (SRB) is designed to improve the ability to classify different transportation network types through deep supervised fusion and class attention. A multiclass transportation network dataset (MCTN dataset) was constructed and used in the experiments. The experimental results indicate that the proposed method achieves a mean intersection over union (MIoU) of 64.29% and a frequency-weighted intersection over union (FWIoU) of 71.20% excluding the background, which is significantly better than other road extraction models and semantic segmentation methods. The code and dataset are available at https://github.com/fzzfRS/MCTN-Net.
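The reported metrics (MIoU and FWIoU excluding the background class) can be computed from a per-class confusion matrix. The sketch below is a minimal NumPy illustration, not the authors' released code; the class layout (background as index 0, plus railway, roadway, trail, and bridge) and the choice to re-normalize class frequencies over the foreground classes when computing FWIoU without background are assumptions.

```python
import numpy as np

def confusion_matrix(pred, label, num_classes):
    """Accumulate a num_classes x num_classes confusion matrix from
    integer prediction and label maps of the same shape.
    hist[i, j] counts pixels whose true class is i and predicted class is j."""
    mask = (label >= 0) & (label < num_classes) & (pred >= 0) & (pred < num_classes)
    hist = np.bincount(
        num_classes * label[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    return hist

def miou_fwiou_without_background(hist, background_index=0):
    """Compute MIoU and FWIoU over the foreground classes only."""
    tp = np.diag(hist)
    fp = hist.sum(axis=0) - tp
    fn = hist.sum(axis=1) - tp
    iou = tp / np.maximum(tp + fp + fn, 1)        # per-class IoU
    freq = hist.sum(axis=1) / max(hist.sum(), 1)  # per-class pixel frequency (by true label)

    keep = np.arange(hist.shape[0]) != background_index
    miou = iou[keep].mean()
    # assumption: frequencies are re-normalized over the kept (foreground) classes
    fwiou = (freq[keep] / freq[keep].sum() * iou[keep]).sum()
    return miou, fwiou

# Example usage with 5 hypothetical classes: background + railway, roadway, trail, bridge.
pred = np.random.randint(0, 5, size=(2, 256, 256))
label = np.random.randint(0, 5, size=(2, 256, 256))
hist = confusion_matrix(pred, label, num_classes=5)
miou, fwiou = miou_fwiou_without_background(hist, background_index=0)
print(f"MIoU (no background): {miou:.4f}, FWIoU (no background): {fwiou:.4f}")
```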