{"title":"EMAS: Efficient Meta Architecture Search for Few-Shot Learning","authors":"Dongkai Liu, Jiaxing Li, Honglong Chen, Baodi Liu, Xiaoping Lu, Weifeng Liu","doi":"10.1109/ICTAI56018.2022.00099","DOIUrl":null,"url":null,"abstract":"With the progress of few-shot learning, it has been scaled to many domains in the real world which have few labeled data, such as image classification and object detection. Many efforts for data embedding and feature combination have been made by designing a fixed neural architecture that can also be extended to variable and adaptive neural architectures for better performance. Recent works leverage neural architecture search technique to automatically design networks for few-shot learning but it requires vast computation costs and GPU memory requirements. This work introduces EMAS, an efficient method to speed up the searching process for few-shot learning. Specifically, we build a supernet to combine all candidate operations and then adopt gradient-based methods to search. Instead of training the whole supernet, we adopt Gumbel reparameterization technique to sample and activate a small subset of operations. EMAS handles a single path in a novel task adapted with just a few steps and time. A novel task only needs to learn fewer parameters and compute less content. During meta-testing, the task can well adapt to the network architecture although only with a few iterations. Empirical results show that EMAS yields a fair improvement in accuracy on the standard few-shot classification benchmark and is five times smaller in time.","PeriodicalId":354314,"journal":{"name":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 34th International Conference on Tools with Artificial Intelligence (ICTAI)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICTAI56018.2022.00099","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
As few-shot learning has progressed, it has been applied to many real-world domains with little labeled data, such as image classification and object detection. Much of the work on data embedding and feature combination relies on a hand-designed, fixed neural architecture, which can be extended to variable and adaptive architectures for better performance. Recent works leverage neural architecture search (NAS) to design networks for few-shot learning automatically, but this incurs high computational cost and GPU memory consumption. This work introduces EMAS, an efficient method that speeds up the search process for few-shot learning. Specifically, we build a supernet that combines all candidate operations and search it with gradient-based methods. Instead of training the whole supernet, we use the Gumbel reparameterization technique to sample and activate only a small subset of operations. EMAS adapts a single sampled path to a novel task in just a few steps and little time, so each novel task learns fewer parameters and requires less computation. During meta-testing, the architecture adapts well to a task within only a few iterations. Empirical results show that EMAS yields a fair improvement in accuracy on the standard few-shot classification benchmark while requiring roughly five times less search time.
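To make the sampling step concrete, the sketch below (not the authors' released code) shows a supernet "mixed operation" that uses hard Gumbel-softmax reparameterization to sample and execute a single candidate operation per forward pass, so only that path is trained while gradients still reach the architecture logits. The candidate set, class names, and PyTorch usage are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch, assuming a PyTorch-style supernet cell: one "mixed op" whose
# architecture logits are trained by gradient descent while each forward pass
# activates a single Gumbel-sampled candidate operation.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical candidate operations; the paper's actual search space may differ.
CANDIDATES = {
    "conv3x3": lambda c: nn.Conv2d(c, c, 3, padding=1),
    "conv5x5": lambda c: nn.Conv2d(c, c, 5, padding=2),
    "maxpool": lambda c: nn.MaxPool2d(3, stride=1, padding=1),
    "identity": lambda c: nn.Identity(),
}

class GumbelMixedOp(nn.Module):
    def __init__(self, channels: int, tau: float = 1.0):
        super().__init__()
        self.ops = nn.ModuleList(build(channels) for build in CANDIDATES.values())
        # Architecture parameters (logits) over candidate operations.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))
        self.tau = tau

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Hard Gumbel-softmax: one-hot sample in the forward pass, gradients
        # flow to alpha through the soft relaxation (straight-through trick).
        weights = F.gumbel_softmax(self.alpha, tau=self.tau, hard=True)
        idx = int(weights.argmax())
        # Only the sampled operation is executed; scaling by the one-hot weight
        # keeps the output differentiable with respect to alpha.
        return weights[idx] * self.ops[idx](x)

if __name__ == "__main__":
    # Toy usage: one mixed op on a small feature map.
    op = GumbelMixedOp(channels=16)
    x = torch.randn(2, 16, 8, 8)
    y = op(x)
    y.sum().backward()
    print(y.shape, op.alpha.grad)
```

In this setup, a full supernet would stack such mixed ops; because only the sampled path is executed, each meta-training or meta-testing step touches a small fraction of the supernet's parameters, which is the source of the claimed speed-up.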