{"title":"物联网网络中工业4.0应用的ai联合新型延迟感知链路调度","authors":"Suvarna Patil, Prasad Gokhale","doi":"10.1108/ijpcc-12-2021-0297","DOIUrl":null,"url":null,"abstract":"\nPurpose\nWith the advent of AI-federated technologies, it is feasible to perform complex tasks in industrial Internet of Things (IIoT) environment by enhancing throughput of the network and by reducing the latency of transmitted data. The communications in IIoT and Industry 4.0 requires handshaking of multiple technologies for supporting heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications are responsible for generating a vast set of data based on diverse characteristics. To obtain an optimum throughput in an IIoT environment requires efficiently processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with the least amount of delay is the need of the IIoT applications. Although some existing scheduling strategies address delay concerns, faster transmission of data and optimal throughput should also be addressed along with the handling of transmission delay. Hence, this study aims to focus on a fair mechanism to handle throughput, transmission delay and faster transmission of data. The proposed work provides a link-scheduling algorithm termed as delay-aware resource allocation that allocates computing resources to computational-sensitive tasks by reducing overall latency and by increasing the overall throughput of the network. First of all, a multi-hop delay model is developed with multistep delay prediction using AI-federated neural network long–short-term memory (LSTM), which serves as a foundation for future design. Then, link-scheduling algorithm is designed for data routing in an efficient manner. The extensive experimental results reveal that the average end-to-end delay by considering processing, propagation, queueing and transmission delays is minimized with the proposed strategy. Experiments show that advances in machine learning have led to developing a smart, collaborative link scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of AI-federated LSTM is compared with the existing approaches and it outperforms over other techniques by achieving 98.2% accuracy.\n\n\nDesign/methodology/approach\nWith an increase of IoT devices, the demand for more IoT gateways has increased, which increases the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways in this study. Each gateway may use a different communication technology for data transmission within an IoT network. As a result, gateways are heterogeneous, with hardware support limited to the technologies associated with the wireless sensor networks. Data communication fairness at each gateway is achieved in an IoT network by considering dynamic IoT traffic and link-scheduling problems to achieve effective resource allocation in an IoT network. The two-phased solution is provided to solve these problems for improved data communication in heterogeneous networks achieving fairness. In the first phase, traffic is predicted using the LSTM network model to predict the dynamic traffic. 
In the second phase, efficient link selection per technology and link scheduling are achieved based on predicted load, the distance between gateways, link capacity and time required as per different technologies supported such as Bluetooth, Wi-Fi and Zigbee. It enhances data transmission fairness for all gateways, resulting in more data transmission achieving maximum throughput. Our proposed approach outperforms by achieving maximum network throughput, and less packet delay is demonstrated using simulation.\n\n\nFindings\nOur proposed approach outperforms by achieving maximum network throughput, and less packet delay is demonstrated using simulation. It also shows that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.\n\n\nOriginality/value\nThe concept is a part of the original research work and can be adopted by Industry 4.0 for easy and seamless connectivity of AI and IoT-federated devices.\n","PeriodicalId":43952,"journal":{"name":"International Journal of Pervasive Computing and Communications","volume":null,"pages":null},"PeriodicalIF":0.6000,"publicationDate":"2022-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"AI-federated novel delay-aware link-scheduling for Industry 4.0 applications in IoT networks\",\"authors\":\"Suvarna Patil, Prasad Gokhale\",\"doi\":\"10.1108/ijpcc-12-2021-0297\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"\\nPurpose\\nWith the advent of AI-federated technologies, it is feasible to perform complex tasks in industrial Internet of Things (IIoT) environment by enhancing throughput of the network and by reducing the latency of transmitted data. The communications in IIoT and Industry 4.0 requires handshaking of multiple technologies for supporting heterogeneous networks and diverse protocols. IIoT applications may gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications are responsible for generating a vast set of data based on diverse characteristics. To obtain an optimum throughput in an IIoT environment requires efficiently processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with the least amount of delay is the need of the IIoT applications. Although some existing scheduling strategies address delay concerns, faster transmission of data and optimal throughput should also be addressed along with the handling of transmission delay. Hence, this study aims to focus on a fair mechanism to handle throughput, transmission delay and faster transmission of data. The proposed work provides a link-scheduling algorithm termed as delay-aware resource allocation that allocates computing resources to computational-sensitive tasks by reducing overall latency and by increasing the overall throughput of the network. First of all, a multi-hop delay model is developed with multistep delay prediction using AI-federated neural network long–short-term memory (LSTM), which serves as a foundation for future design. Then, link-scheduling algorithm is designed for data routing in an efficient manner. The extensive experimental results reveal that the average end-to-end delay by considering processing, propagation, queueing and transmission delays is minimized with the proposed strategy. 
Experiments show that advances in machine learning have led to developing a smart, collaborative link scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of AI-federated LSTM is compared with the existing approaches and it outperforms over other techniques by achieving 98.2% accuracy.\\n\\n\\nDesign/methodology/approach\\nWith an increase of IoT devices, the demand for more IoT gateways has increased, which increases the cost of network infrastructure. As a result, the proposed system uses low-cost intermediate gateways in this study. Each gateway may use a different communication technology for data transmission within an IoT network. As a result, gateways are heterogeneous, with hardware support limited to the technologies associated with the wireless sensor networks. Data communication fairness at each gateway is achieved in an IoT network by considering dynamic IoT traffic and link-scheduling problems to achieve effective resource allocation in an IoT network. The two-phased solution is provided to solve these problems for improved data communication in heterogeneous networks achieving fairness. In the first phase, traffic is predicted using the LSTM network model to predict the dynamic traffic. In the second phase, efficient link selection per technology and link scheduling are achieved based on predicted load, the distance between gateways, link capacity and time required as per different technologies supported such as Bluetooth, Wi-Fi and Zigbee. It enhances data transmission fairness for all gateways, resulting in more data transmission achieving maximum throughput. Our proposed approach outperforms by achieving maximum network throughput, and less packet delay is demonstrated using simulation.\\n\\n\\nFindings\\nOur proposed approach outperforms by achieving maximum network throughput, and less packet delay is demonstrated using simulation. It also shows that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.\\n\\n\\nOriginality/value\\nThe concept is a part of the original research work and can be adopted by Industry 4.0 for easy and seamless connectivity of AI and IoT-federated devices.\\n\",\"PeriodicalId\":43952,\"journal\":{\"name\":\"International Journal of Pervasive Computing and Communications\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.6000,\"publicationDate\":\"2022-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"International Journal of Pervasive Computing and Communications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1108/ijpcc-12-2021-0297\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Pervasive Computing and Communications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1108/ijpcc-12-2021-0297","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS","Score":null,"Total":0}
AI-federated novel delay-aware link-scheduling for Industry 4.0 applications in IoT networks
Purpose
With the advent of AI-federated technologies, it is feasible to perform complex tasks in an industrial Internet of Things (IIoT) environment by enhancing network throughput and reducing the latency of transmitted data. Communication in IIoT and Industry 4.0 requires the handshaking of multiple technologies to support heterogeneous networks and diverse protocols. IIoT applications can gather and analyse sensor data, allowing operators to monitor and manage production systems, resulting in considerable performance gains in automated processes. All IIoT applications generate vast sets of data with diverse characteristics. Obtaining optimum throughput in an IIoT environment requires efficient processing of IIoT applications over communication channels. Because computing resources in the IIoT are limited, equitable resource allocation with minimal delay is a core requirement of IIoT applications. Although some existing scheduling strategies address delay concerns, faster data transmission and optimal throughput should be addressed alongside transmission delay. Hence, this study focuses on a fair mechanism that jointly handles throughput, transmission delay and faster data transmission. The proposed work provides a link-scheduling algorithm, termed delay-aware resource allocation, that allocates computing resources to computation-sensitive tasks, reducing overall latency and increasing overall network throughput. First, a multi-hop delay model with multistep delay prediction is developed using an AI-federated long short-term memory (LSTM) neural network, which serves as a foundation for the subsequent design. Then, a link-scheduling algorithm is designed to route data efficiently. Extensive experimental results reveal that the proposed strategy minimizes the average end-to-end delay, accounting for processing, propagation, queueing and transmission delays. The experiments show that advances in machine learning enable a smart, collaborative link-scheduling algorithm for fairness-driven resource allocation with minimal delay and optimal throughput. The prediction performance of the AI-federated LSTM is compared with existing approaches, and it outperforms other techniques, achieving 98.2% accuracy.
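The abstract does not include implementation details, so the following is a minimal, hypothetical sketch of multistep delay prediction with an LSTM, in the spirit of the model described above. The window length, forecast horizon, single input feature and training setup are all illustrative assumptions, not the authors' published configuration. (Per-hop end-to-end delay is conventionally decomposed as the sum, over all hops, of processing, queueing, transmission and propagation delays; the model below forecasts the resulting delay series directly.)

```python
# Minimal sketch (not the authors' code) of multistep delay prediction
# with an LSTM, assuming delay samples are collected at gateways.
import numpy as np
import tensorflow as tf

WINDOW = 20   # past delay observations fed to the model (assumed)
HORIZON = 5   # number of future delay steps predicted (assumed)

def make_windows(series, window=WINDOW, horizon=HORIZON):
    """Slice a 1-D delay series into (input window, multistep target) pairs."""
    X, y = [], []
    for i in range(len(series) - window - horizon + 1):
        X.append(series[i:i + window])
        y.append(series[i + window:i + window + horizon])
    # Add a trailing feature axis: LSTM expects (samples, timesteps, features).
    return np.array(X)[..., np.newaxis], np.array(y)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(HORIZON),  # one output per future delay step
])
model.compile(optimizer="adam", loss="mse")

# delays: measured delay samples (e.g. in milliseconds); placeholder data here.
delays = np.random.rand(1000).astype("float32")
X, y = make_windows(delays)
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

next_delays = model.predict(X[-1:])  # multistep forecast for the scheduler
```

A multistep (sequence-to-vector) output lets the scheduler look several slots ahead rather than only one step, which is what makes proactive, delay-aware link scheduling possible.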
Design/methodology/approach
With the increase in IoT devices, the demand for IoT gateways has grown, raising the cost of network infrastructure; the proposed system therefore uses low-cost intermediate gateways. Each gateway may use a different communication technology for data transmission within an IoT network, so gateways are heterogeneous, with hardware support limited to the technologies associated with wireless sensor networks. Fair data communication at each gateway is achieved by jointly considering dynamic IoT traffic and the link-scheduling problem, yielding effective resource allocation in the IoT network. A two-phase solution is provided to solve these problems and improve data communication in heterogeneous networks while achieving fairness. In the first phase, dynamic traffic is predicted using an LSTM network model. In the second phase, efficient per-technology link selection and link scheduling are performed based on the predicted load, the distance between gateways, link capacity and the time required by each supported technology, such as Bluetooth, Wi-Fi and Zigbee. This enhances data-transmission fairness across all gateways, allowing more data to be transmitted and maximum throughput to be achieved. Simulation demonstrates that the proposed approach outperforms alternatives, achieving maximum network throughput with lower packet delay.
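For illustration only, the second phase might look like the sketch below: given the LSTM-predicted load at each gateway, choose among its candidate links (Bluetooth, Wi-Fi, Zigbee) the one with the lowest estimated completion time, combining transmission delay (load / capacity) with propagation delay (distance / signal speed). The scoring rule, data rates and names here are assumptions, not the authors' published algorithm.

```python
# Hypothetical sketch of phase-2 link selection across heterogeneous
# technologies; rates, distances and the scoring rule are illustrative.
from dataclasses import dataclass

PROPAGATION_SPEED = 3e8  # m/s, free-space assumption

@dataclass
class Link:
    technology: str      # e.g. "Bluetooth", "Wi-Fi", "Zigbee"
    capacity_bps: float  # link capacity in bits per second
    distance_m: float    # distance between gateways in metres

def completion_time(load_bits: float, link: Link) -> float:
    """Estimated time to push the predicted load over this link:
    transmission delay plus propagation delay."""
    return load_bits / link.capacity_bps + link.distance_m / PROPAGATION_SPEED

def select_links(predicted_loads: dict, candidates: dict) -> dict:
    """For each gateway, choose the candidate link minimizing completion time."""
    schedule = {}
    for gw, load in predicted_loads.items():
        schedule[gw] = min(candidates[gw], key=lambda l: completion_time(load, l))
    return schedule

# Example: two gateways, each with heterogeneous candidate links.
loads = {"gw1": 2e6, "gw2": 5e5}  # predicted bits per scheduling interval
links = {
    "gw1": [Link("Wi-Fi", 54e6, 30), Link("Zigbee", 250e3, 30)],
    "gw2": [Link("Bluetooth", 2e6, 10), Link("Wi-Fi", 54e6, 80)],
}
print({gw: l.technology for gw, l in select_links(loads, links).items()})
```

A full scheduler would additionally enforce fairness across gateways (for example, bounding how often one gateway wins a contested slot), which is the fairness-driven aspect of the proposed allocation.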
Findings
Simulation demonstrates that the proposed approach outperforms alternatives, achieving maximum network throughput with lower packet delay. It also shows that AI- and IoT-federated devices can communicate seamlessly over IoT networks in Industry 4.0.
Originality/value
The concept is part of the original research work and can be adopted in Industry 4.0 for easy and seamless connectivity of AI- and IoT-federated devices.