{"title":"Work-in-Progress: Cooperative MLP-Mixer Networks Inference On Heterogeneous Edge Devices through Partition and Fusion","authors":"Yiming Li, Shouzhen Gu, Mingsong Chen","doi":"10.1109/CASES55004.2022.00021","DOIUrl":null,"url":null,"abstract":"As a newly proposed DNN architecture, MLP-Mixer is attracting increasing attention due to its competitive results compared to CNNs and attention-base networks in various tasks. Although MLP-Mixer only contains MLP layers, it still suffers from high communication costs in edge computing scenarios, resulting in long inference time. To improve the inference performance of an MLP-Mixer model on correlated resource-constrained heterogeneous edge devices, this paper proposes a novel partition and fusion method specific for MLP-Mixer layers, which can significantly reduce the communication costs. Experimental results show that, when the number of devices increases from 2 to 6, our partition and fusion method can archive 1.01-1.27x and 1.54-3.12x speedup in scenarios with heterogeneous and homogeneous devices, respectively.","PeriodicalId":331181,"journal":{"name":"2022 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES)","volume":"76 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Compilers, Architecture, and Synthesis for Embedded Systems (CASES)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/CASES55004.2022.00021","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
As a newly proposed DNN architecture, MLP-Mixer is attracting increasing attention due to its competitive results compared to CNNs and attention-based networks on various tasks. Although MLP-Mixer contains only MLP layers, it still suffers from high communication costs in edge computing scenarios, resulting in long inference times. To improve the inference performance of an MLP-Mixer model on correlated resource-constrained heterogeneous edge devices, this paper proposes a novel partition and fusion method specifically for MLP-Mixer layers, which can significantly reduce communication costs. Experimental results show that, when the number of devices increases from 2 to 6, our partition and fusion method can achieve 1.01-1.27x and 1.54-3.12x speedups in scenarios with heterogeneous and homogeneous devices, respectively.
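The abstract does not spell out how the MLP layers are partitioned and fused, so the sketch below is only a minimal illustration under assumed details, not the paper's method: a Mixer-style per-token MLP whose hidden dimension is split across a hypothetical number of devices, with the partial outputs fused by a single sum. The function names (`mlp`, `partitioned_mlp`), the parameter `num_devices`, and the tensor shapes are all assumptions chosen for the example.

```python
# Illustrative sketch only (not the paper's algorithm): partition a
# Mixer-style MLP's hidden dimension across N simulated devices.
# W1 is split column-wise and W2 row-wise, so each device computes an
# independent partial output; a single sum ("fusion") reconstructs the
# full result without exchanging the large hidden activation.
import numpy as np

def gelu(x):
    # tanh approximation of GELU, applied elementwise
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

def mlp(x, w1, w2):
    # x: (tokens, d_model), w1: (d_model, d_hidden), w2: (d_hidden, d_model)
    return gelu(x @ w1) @ w2

def partitioned_mlp(x, w1, w2, num_devices):
    # Each shard pair (w1 columns, w2 rows) is a self-contained sub-MLP
    # that a single device could execute locally on its copy of x.
    w1_shards = np.array_split(w1, num_devices, axis=1)
    w2_shards = np.array_split(w2, num_devices, axis=0)
    partials = [mlp(x, a, b) for a, b in zip(w1_shards, w2_shards)]
    # One reduction over the small (tokens, d_model) outputs fuses the shards.
    return np.sum(partials, axis=0)

rng = np.random.default_rng(0)
tokens, d_model, d_hidden = 196, 512, 2048
x = rng.standard_normal((tokens, d_model))
w1 = rng.standard_normal((d_model, d_hidden)) * 0.02
w2 = rng.standard_normal((d_hidden, d_model)) * 0.02

full = mlp(x, w1, w2)
split = partitioned_mlp(x, w1, w2, num_devices=4)
print(np.allclose(full, split))  # True: the partition preserves the output
```

In this toy setup, each device only contributes a (tokens, d_model) partial result to the fusion step instead of shipping the (tokens, d_hidden) intermediate activation, which is the kind of communication saving the abstract targets; the paper's actual partition and fusion scheme for MLP-Mixer layers may differ.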