Contour Knowledge-Aware Perception Learning for Semantic Segmentation
Chao You; Licheng Jiao; Lingling Li; Xu Liu; Fang Liu; Wenping Ma; Shuyuan Yang
IEEE Transactions on Circuits and Systems for Video Technology, vol. 35, no. 5, pp. 4560-4575
Published: 2024-12-11 · DOI: 10.1109/TCSVT.2024.3515088
https://ieeexplore.ieee.org/document/10793424/
Citations: 0
Abstract
The diversity of contextual information is of great importance for accurate semantic segmentation. However, most methods focus on a single type of spatial contextual information, which results in overlapping semantic content between categories and a loss of object contour information. In this article, we propose a novel contour knowledge-aware perception learning network (CKPL-Net) that captures diverse contextual information through a space-category aggregation module (SCAM) and a contour-aware calibration module (CACM). First, SCAM is introduced to enhance intraclass consistency and interclass differentiation of features. By integrating space-aware and category-aware attention, SCAM reduces feature redundancy from a categorical perspective while maintaining the spatial correlation of pixels, thereby largely avoiding the overlap of semantic content between categories. Second, CACM is designed to maintain the integrity of objects by perceiving contour contextual information. It develops novel contour-aware knowledge and adaptively transforms the grid structure of convolutions for boundary pixels, which effectively calibrates the representation of features near boundaries. Finally, quantitative and qualitative analyses on three public datasets (ISPRS Potsdam, ISPRS Vaihingen, and WHDLD) demonstrate that the proposed CKPL-Net achieves superior performance compared with prevalent methods, indicating that diverse contextual information is beneficial for accurate segmentation.
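As a rough illustration of the idea behind SCAM, the sketch below combines a pixel-wise spatial attention branch (space-aware) with a channel-wise attention branch (category-aware) and fuses them with a residual connection. This is a minimal PyTorch sketch under our own assumptions; the class name, tensor shapes, and fusion rule are illustrative and not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpaceCategoryAggregation(nn.Module):
    """Illustrative sketch of a space-category aggregation block (assumed design).

    A spatial self-attention branch models pixel-to-pixel affinity, while a
    channel-attention branch re-weights feature channels as a proxy for
    category-aware context. The two contexts are fused residually.
    """

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        # Space-aware branch: query/key/value projections for spatial attention.
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        # Category-aware branch: channel attention via global pooling + MLP.
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable fusion weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Space-aware attention over all pixel positions.
        q = self.query(x).flatten(2).transpose(1, 2)       # (B, HW, C/r)
        k = self.key(x).flatten(2)                          # (B, C/r, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)           # (B, HW, HW)
        v = self.value(x).flatten(2).transpose(1, 2)        # (B, HW, C)
        spatial = torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)
        # Category-aware attention over channels.
        pooled = x.mean(dim=(2, 3))                         # (B, C)
        cat_weights = torch.sigmoid(self.channel_mlp(pooled)).view(b, c, 1, 1)
        categorical = x * cat_weights
        # Fuse both contexts with a residual connection.
        return x + self.gamma * (spatial + categorical)
```

Initializing the fusion weight at zero lets the block start as an identity mapping and gradually learn how much aggregated context to inject, a common stabilization choice in attention modules.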
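The contour-aware calibration described in the abstract, which "adaptively transforms the grid structure of convolutions for boundary pixels", is reminiscent of a contour-guided deformable convolution. The sketch below shows one plausible realization using torchvision's DeformConv2d: a predicted contour map gates per-pixel sampling offsets so that only boundary regions deviate from the regular grid. The offset prediction and contour gating are our assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d


class ContourAwareCalibration(nn.Module):
    """Illustrative sketch of contour-guided feature calibration (assumed design)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Predict a single-channel contour probability map from the features.
        self.contour_head = nn.Conv2d(channels, 1, kernel_size=1)
        # Predict 2D offsets (x, y) for each kernel sampling location.
        self.offset_head = nn.Conv2d(
            channels + 1, 2 * kernel_size * kernel_size, kernel_size=3, padding=pad
        )
        # Deformable convolution applies the adapted sampling grid.
        self.deform_conv = DeformConv2d(
            channels, channels, kernel_size=kernel_size, padding=pad
        )

    def forward(self, x: torch.Tensor):
        contour = torch.sigmoid(self.contour_head(x))            # (B, 1, H, W)
        offsets = self.offset_head(torch.cat([x, contour], 1))   # (B, 2*k*k, H, W)
        # Gate offsets by the contour map: interior pixels keep the regular
        # grid, while boundary pixels receive a deformed sampling grid.
        offsets = offsets * contour
        calibrated = self.deform_conv(x, offsets)
        return x + calibrated, contour
```

In this sketch the contour head could additionally be supervised with boundary labels derived from the segmentation ground truth, which would make the offset gating explicitly contour-aware rather than purely learned.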
About the Journal
The IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) is dedicated to covering all aspects of video technologies from a circuits and systems perspective. We encourage submissions of general, theoretical, and application-oriented papers related to image and video acquisition, representation, presentation, and display. Additionally, we welcome contributions in areas such as processing, filtering, and transforms; analysis and synthesis; learning and understanding; compression, transmission, communication, and networking; as well as storage, retrieval, indexing, and search. Furthermore, papers focusing on hardware and software design and implementation are highly valued. Join us in advancing the field of video technology through innovative research and insights.