ParaNet: A Single Blocked Network for Mobile Edge Computing Devices
Sharmen Akhter, Md. Imtiaz Hossain, M. Hossain, C. Hong, E. Huh
2023 International Conference on Information Networking (ICOIN), 11 January 2023. DOI: 10.1109/ICOIN56518.2023.10048960
Deep learning-based approaches have achieved significant performance across diverse applications. However, because these high-performing architectures have millions of parameters and high complexity, they are not well suited for deployment on edge devices, the Internet of Things (IoT), vehicular edge computing, or microservices-based real-time applications. Although numerous approaches have proposed lightweight architectures to reduce the required computational resources, concerns remain about latency, execution, and response time. To the best of our knowledge, no prior work has considered reorganizing a network's sequential blocks for parallel forward propagation, i.e., converting sequential forward propagation into parallel forward propagation. In this paper, rather than reducing the time the network needs for end-to-end sequential execution, we propose a novel technique that constructs a parallel network, ParaNet, to minimize execution time by parallelizing the network itself. First, we dissect a CNN block-wise and deploy all the blocks in parallel to construct ParaNet. Each block is treated as an individual network and can be deployed on a different low-compute edge device for parallel processing. To further improve performance, we apply knowledge distillation to each ParaNet variant. Compared to the corresponding baseline architectures, our proposed method delivers state-of-the-art results with low computational resources and very low execution delay. Our extensive analysis and results demonstrate the superiority of ParaNet in both accuracy and execution time.
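The abstract describes the core construction only at a high level: a sequential CNN is cut into blocks, each block is treated as a standalone network, and the blocks run in parallel on separate devices. Below is a minimal PyTorch sketch of one plausible reading; the `ParallelBlock` and `ParaNetSketch` names, the per-block classifier heads, and the logit-averaging merge are illustrative assumptions, not the paper's actual construction.

```python
# Hypothetical sketch: wrap each block of a CNN as an independent
# classifier that could run in parallel (e.g., on a separate edge
# device), then merge the per-block predictions.
import torch
import torch.nn as nn


class ParallelBlock(nn.Module):
    """One block of the dissected CNN, wrapped as a standalone classifier
    (assumed structure; the paper does not specify the per-block head)."""

    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),  # global pooling -> (N, 32, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))


class ParaNetSketch(nn.Module):
    """Runs every block on the same input. In deployment, each block could
    live on a different device, with only logits sent back for merging."""

    def __init__(self, num_blocks: int = 4, in_channels: int = 3,
                 num_classes: int = 10):
        super().__init__()
        self.blocks = nn.ModuleList(
            ParallelBlock(in_channels, num_classes) for _ in range(num_blocks)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (num_blocks, batch, classes) -> average over blocks
        logits = torch.stack([block(x) for block in self.blocks])
        return logits.mean(dim=0)


model = ParaNetSketch()
out = model(torch.randn(8, 3, 32, 32))  # -> shape (8, 10)
```

In an actual edge deployment, each element of `blocks` would be shipped to a different low-compute device and only the class logits would be transmitted back for aggregation, which is what removes the end-to-end sequential latency the abstract targets.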
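The abstract also states that knowledge distillation is applied to each ParaNet variant but gives no formulation. The sketch below shows the standard Hinton-style distillation loss as one way this step could look; the function name, the temperature, and the weighting `alpha` are assumptions, not values from the paper.

```python
# Hypothetical sketch of the knowledge-distillation step: a pretrained
# teacher (e.g., the original sequential CNN) provides softened logits,
# and the ParaNet student is trained on a weighted sum of the hard-label
# loss and the soft-label (KL) loss.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      targets: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Hard loss against ground-truth class indices.
    hard = F.cross_entropy(student_logits, targets)
    # Soft loss against the teacher's temperature-softened distribution;
    # the T^2 factor keeps gradient magnitudes comparable across temperatures.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * (temperature ** 2)
    return alpha * hard + (1.0 - alpha) * soft
```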