Work-in-Progress: SuperNAS: Fast Multi-Objective SuperNet Architecture Search for Semantic Segmentation
Marihan Amein, Zhuoran Xiong, Olivier Therrien, B. Meyer, W. Gross
2022 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES), October 2022. DOI: 10.1109/CASES55004.2022.00024
We present SuperNAS, a fast multi-objective neural architecture search framework for semantic segmentation. SuperNAS subsamples the structure and pre-trained parameters of DeepLabV3+ without fine-tuning, dramatically reducing training time during search. To further reduce candidate evaluation time, we use a subset of the validation dataset during search. Only the final, Pareto-dominant candidates are fine-tuned using the complete training set. We evaluate SuperNAS by searching for models that effectively trade off accuracy against computational cost on the PASCAL VOC 2012 dataset. SuperNAS finds competitive designs quickly, e.g., taking just 0.5 GPU days to discover a DeepLabV3+ variant that reduces FLOPs and parameters by 10% and 20%, respectively, for less than a 3% increase in error.
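The abstract describes a multi-objective search in which candidates are scored on proxy objectives (error on a validation subset, FLOPs, parameter count) and only the non-dominated survivors are fine-tuned on the full training set. The sketch below illustrates the Pareto-filtering step in generic Python; it is not the authors' implementation, and all names, objectives, and numbers are hypothetical.

```python
# Minimal sketch of Pareto filtering over proxy objectives, as described in
# the abstract: every candidate dominated by another on all objectives is
# discarded, and only the surviving front would be fine-tuned. Illustrative
# only; not the SuperNAS code.

from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    error: float   # proxy validation error (lower is better)
    flops: float   # FLOPs relative to the DeepLabV3+ baseline
    params: float  # parameter count relative to the baseline

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is at least as good as `b` on every objective and
    strictly better on at least one (all objectives are minimized)."""
    ao = (a.error, a.flops, a.params)
    bo = (b.error, b.flops, b.params)
    return all(x <= y for x, y in zip(ao, bo)) and any(x < y for x, y in zip(ao, bo))

def pareto_front(cands: list[Candidate]) -> list[Candidate]:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in cands
            if not any(dominates(o, c) for o in cands if o is not c)]

if __name__ == "__main__":
    pool = [
        Candidate("baseline",  error=0.10, flops=1.00, params=1.00),
        Candidate("variant-a", error=0.13, flops=0.90, params=0.80),  # cheaper, slightly worse
        Candidate("variant-b", error=0.14, flops=0.95, params=0.90),  # dominated by variant-a
    ]
    for c in pareto_front(pool):
        print(c.name)  # prints: baseline, variant-a
```

Under this framing, "baseline" and "variant-a" both survive (each is best on a different objective), while "variant-b" is dominated and dropped; only the survivors would incur the cost of full fine-tuning.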