Yujie Peng, Xiaoqin Song, F. Liu, Guoliang Xing, Tiecheng Song
DOI: 10.1109/MSN57253.2022.00042
Published in: 2022 18th International Conference on Mobility, Sensing and Networking (MSN), December 2022
Joint Task Partition and Computation Offloading for Latency-Sensitive Services in Mobile Edge Networks
With the development of the Internet of Things (IoT), wireless communication networks, and Artificial Intelligence (AI), more and more real-time applications, such as online games and autonomous driving, have emerged. However, due to limited computing power and battery capacity, it has become increasingly difficult for local user devices to handle the full range of computing tasks under tight timing constraints. The emerging Mobile Edge Computing (MEC) paradigm is widely considered an important technology for achieving ultra-low latency. However, most existing work focuses on non-splittable computation tasks. In fact, data-partitioning-oriented applications can be split into multiple subtasks for parallel processing. In this paper, we study the partial computation offloading of multiple splittable tasks in MEC networks, focusing on minimizing the total user device latency in multi-MEC, multi-user scenarios. Considering the dynamic partitioning of tasks, we adopt the barrel theory (overall latency is governed by the slowest subtask) to construct a linear system of equations that yields the optimal solutions, and we propose a distributed computation offloading approach based on numerical methods. The simulation results show that the proposed algorithm reduces the average user device latency by 31% compared with the binary offloading method.
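The barrel-theory partitioning idea in the abstract can be illustrated with a minimal numerical sketch. This is not the paper's algorithm: it assumes a single divisible task, and hypothetical per-server parameters `rates` (processing rate) and `delays` (fixed offloading delay). Equalizing every server's completion time T so that no "stave" lags behind turns the split into a small linear system with a closed-form solution:

```python
def barrel_partition(workload, rates, delays):
    """Split a divisible task so every server finishes at the same time T.

    Setting T = delays[i] + fractions[i] * workload / rates[i] for all i,
    together with sum(fractions) == 1, gives the closed form
    T = (workload + sum(d_i * r_i)) / sum(r_i).
    """
    assert len(rates) == len(delays)
    total_rate = sum(rates)
    finish_time = (workload + sum(d * r for d, r in zip(delays, rates))) / total_rate
    # Each server's share grows with its rate and its remaining time budget.
    fractions = [(finish_time - d) * r / workload for d, r in zip(delays, rates)]
    # This simple sketch assumes every delay is below T; otherwise that server
    # should be dropped from the split and the system re-solved.
    assert all(f >= 0 for f in fractions), "a server's delay exceeds T"
    return finish_time, fractions

# Hypothetical example: one task of size 10 split across three servers.
T, fracs = barrel_partition(workload=10.0, rates=[2.0, 3.0, 5.0],
                            delays=[0.1, 0.2, 0.05])
```

Because all completion times are equal by construction, the fastest server carries the largest share, which is the sense in which the slowest component would otherwise bound the total latency.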