{"title":"SceneFormer: Neural Architecture Search of Transformers for Remote Sensing Scene Classification","authors":"Lyuyang Tong;Jie Liu;Bo Du","doi":"10.1109/TGRS.2025.3555207","DOIUrl":null,"url":null,"abstract":"Deep learning-based scene classification methods have long been a key research area in remote sensing imagery due to their wide-ranging applications. Recently, Transformer models have achieved significant progress in computer vision, making vision transformers (ViTs) a promising direction for scene classification. However, the spatial complexity of remote sensing imagery poses unique challenges for applying Transformers directly. Manually designing Transformers tailored for remote sensing scene classification is time-consuming under model parameter constraints and requires extensive domain expertise. To address this challenge, neural architecture search (NAS) methods provide an effective solution to construct optimal Transformer architectures for remote sensing scene classification automatically. In this work, we propose SceneFormer, an automated Transformer architecture search framework tailored for scene classification tasks. In SceneFormer, we construct a dedicated search space to search for the optimal Transformer. Moreover, we design a supernet training strategy to train numerous candidate architectures within the search space simultaneously. Furthermore, SceneFormer employs the evolutionary search to find the optimal Transformer architecture under specific resource constraints. Experiments on three high-spatial-resolution (HSR) datasets demonstrate the effectiveness of SceneFormer.","PeriodicalId":13213,"journal":{"name":"IEEE Transactions on Geoscience and Remote Sensing","volume":"63 ","pages":"1-15"},"PeriodicalIF":7.5000,"publicationDate":"2025-03-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Geoscience and Remote Sensing","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10942436/","RegionNum":1,"RegionCategory":"地球科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Scene classification based on deep learning has long been a key research topic in remote sensing imagery due to its wide-ranging applications. Recently, Transformer models have made significant progress in computer vision, making vision transformers (ViTs) a promising direction for scene classification. However, the spatial complexity of remote sensing imagery poses unique challenges for applying Transformers directly. Manually designing Transformers tailored to remote sensing scene classification is time-consuming, particularly under model-parameter constraints, and requires extensive domain expertise. To address this challenge, neural architecture search (NAS) methods offer an effective way to automatically construct optimal Transformer architectures for remote sensing scene classification. In this work, we propose SceneFormer, an automated Transformer architecture search framework tailored for scene classification tasks. In SceneFormer, we construct a dedicated search space from which the optimal Transformer is selected. Moreover, we design a supernet training strategy to train the numerous candidate architectures within the search space simultaneously. Finally, SceneFormer employs evolutionary search to find the optimal Transformer architecture under specific resource constraints. Experiments on three high-spatial-resolution (HSR) datasets demonstrate the effectiveness of SceneFormer.
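The abstract outlines a common two-stage weight-sharing NAS recipe: first train a supernet whose sampled sub-networks share parameters, then run evolutionary search over the trained sub-networks under a resource budget. The sketch below illustrates that recipe in PyTorch with an elastic-depth-only search space; all names, sizes, and the depth-based resource proxy are illustrative assumptions, not SceneFormer's actual search space or implementation.

```python
import random
import torch
import torch.nn as nn

# Illustrative search space: only depth is elastic here. SceneFormer's real
# space (width, heads, MLP ratio, ...) is not specified in the abstract.
DEPTHS = [4, 6, 8]
D_MODEL, N_TOKENS, N_CLASSES = 192, 49, 30  # toy sizes, not the paper's

class SuperNet(nn.Module):
    """Weight-sharing supernet: blocks are built once at maximum depth;
    each sampled configuration activates only a prefix of them."""
    def __init__(self):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(D_MODEL, nhead=4,
                                       dim_feedforward=4 * D_MODEL,
                                       batch_first=True)
            for _ in range(max(DEPTHS)))
        self.head = nn.Linear(D_MODEL, N_CLASSES)

    def forward(self, x, depth):
        for blk in self.blocks[:depth]:   # elastic depth: prefix of blocks
            x = blk(x)
        return self.head(x.mean(dim=1))   # mean-pool tokens, then classify

def train_step(net, opt, x, y):
    """One supernet update: sample a random subnet and backprop through it,
    so all candidates are trained simultaneously via shared weights."""
    loss = nn.functional.cross_entropy(net(x, random.choice(DEPTHS)), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def accuracy(net, depth, x, y):
    return (net(x, depth).argmax(dim=1) == y).float().mean().item()

def evolutionary_search(net, x_val, y_val, max_depth=6,
                        pop_size=8, generations=5):
    """Toy evolutionary search under a resource constraint (active depth
    stands in for a parameter/FLOPs budget)."""
    feasible = [d for d in DEPTHS if d <= max_depth]
    population = [random.choice(feasible) for _ in range(pop_size)]
    best = None
    for _ in range(generations):
        scored = sorted(((accuracy(net, d, x_val, y_val), d)
                         for d in population), reverse=True)
        if best is None or scored[0] > best:
            best = scored[0]
        parents = [d for _, d in scored[: pop_size // 2]]
        # "mutate" parents by resampling feasible depths
        population = parents + [random.choice(feasible) for _ in parents]
    return best[1]

# Toy usage with random stand-ins for patch-embedded scene images.
net = SuperNet()
opt = torch.optim.AdamW(net.parameters(), lr=3e-4)
x = torch.randn(16, N_TOKENS, D_MODEL)
y = torch.randint(0, N_CLASSES, (16,))
for _ in range(5):
    train_step(net, opt, x, y)
print("best depth under budget:", evolutionary_search(net, x, y))
```

In the actual framework, fitness would presumably be measured on a held-out validation split of the remote sensing scene datasets, and the constraint would be the subnet's parameter count rather than depth alone; the decoupling of supernet training from search is what lets many candidates be evaluated without retraining each one.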
Journal Introduction:
IEEE Transactions on Geoscience and Remote Sensing (TGRS) is a monthly publication that focuses on the theory, concepts, and techniques of science and engineering as applied to sensing the land, oceans, atmosphere, and space; and the processing, interpretation, and dissemination of this information.