Cross-form efficient attention pyramidal network for semantic image segmentation
Anamika Maurya, S. Chand
DOI: 10.3233/aic-210266
AI Communications, vol. 50, no. 1, pp. 225–242, published 2022-08-09 (Journal Article)
Impact Factor: 1.4 · JCR: Q4 (Computer Science, Artificial Intelligence)
Citations: 1
Abstract
Although convolutional neural networks (CNNs) are leading the way in semantic segmentation, standard methods still have some flaws. First, features are redundant and their representations are weakly discriminative. Second, the number of effective multi-scale features is limited. In this paper, we address these limitations with a proposed network that uses two effective pre-trained models as its encoder. We develop a cross-form attention pyramid that acquires semantically rich multi-scale information from local and global priors. A spatial-wise attention module is introduced to further improve the segmentation results: it highlights the more discriminative regions of low-level features so the network focuses on significant location information. We demonstrate the efficacy of the proposed network on three datasets: IDD Lite, PASCAL VOC 2012, and CamVid. Our model achieves mIoU scores of 70.7% on IDD Lite, 83.98% on PASCAL VOC 2012, and 73.8% on CamVid.
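The spatial-wise attention described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: it replaces the learned convolution of a typical spatial-attention design with a simple sum of channel-wise mean and max statistics, purely to show how a per-location map in (0, 1) reweights low-level features. All names (`spatial_attention`, `feats`) are hypothetical.

```python
import numpy as np

def spatial_attention(features):
    """Sketch of spatial attention over a (C, H, W) feature map.

    Channel-wise mean and max statistics are combined (a stand-in for a
    learned conv layer), squashed with a sigmoid to a per-location weight
    in (0, 1), and used to reweight every channel at that location.
    """
    avg_pool = features.mean(axis=0, keepdims=True)  # (1, H, W)
    max_pool = features.max(axis=0, keepdims=True)   # (1, H, W)
    combined = avg_pool + max_pool                   # hypothetical fusion, not a learned conv
    attn = 1.0 / (1.0 + np.exp(-combined))           # sigmoid -> attention map in (0, 1)
    return features * attn                           # broadcast over all C channels

feats = np.random.rand(8, 16, 16)  # C=8 channels on a 16x16 spatial grid
out = spatial_attention(feats)
print(out.shape)  # (8, 16, 16) — same shape, locations reweighted
```

In a trainable version, `combined` would come from a small convolution over the stacked pooled maps, so the network learns which spatial regions are discriminative rather than relying on fixed statistics.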
Journal description:
AI Communications is a journal on artificial intelligence (AI) with a close relationship to EurAI (European Association for Artificial Intelligence, formerly ECCAI). It covers the whole AI community: scientific institutions as well as commercial and industrial companies.
AI Communications aims to enhance contacts and information exchange between AI researchers and developers, and to provide supranational information to those concerned with AI and advanced information processing. AI Communications publishes refereed articles on scientific and technical AI procedures, provided they are of sufficient interest to a large readership of both scientific and practical background. In addition, it contains high-level background material, both at the technical level and at the level of opinions, policies, and news.