{"title":"End-to-end multi-granulation causality extraction model","authors":"Miao Wu , Qinghua Zhang , Chengying Wu , Guoyin Wang","doi":"10.1016/j.dcan.2023.02.005","DOIUrl":"10.1016/j.dcan.2023.02.005","url":null,"abstract":"<div><div>Causality extraction has become a crucial task in natural language processing and knowledge graph construction. However, most existing methods divide causality extraction into two subtasks: extraction of candidate causal pairs and classification of causality. These methods suffer from cascading errors and the loss of associated contextual information. Therefore, in this study, based on graph theory, an <strong>E</strong>nd-to-end <strong>M</strong>ulti-<strong>G</strong>ranulation <strong>C</strong>ausality <strong>E</strong>xtraction model (EMGCE) is proposed to extract explicit causality and directly mine implicit causality. First, sentences are represented on different granulation layers: character, word, and contextual string layers. The word layer is further fine-grained into three layers: word-index, word-embedding, and word-position-embedding layers. Then, a granular causality tree of the dataset is built based on the word-index layer. Next, an improved tagREtriplet algorithm is designed to obtain the labeled causality based on the granular causality tree, transforming the task into a sequence labeling task. Subsequently, the multi-granulation semantic representation is fed into a neural network model to extract causality. Finally, experimental results on the extended public SemEval 2010 Task 8 dataset demonstrate that EMGCE is effective.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1864-1873"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46773829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"High-throughput SatCom-on-the-move antennas: Technical overview and state-of-the-art","authors":"Yuanzhi He , Fan Yang , Guodong Han , Yuanyuan Li","doi":"10.1016/j.dcan.2023.11.005","DOIUrl":"10.1016/j.dcan.2023.11.005","url":null,"abstract":"<div><div>With the rapid development of satellite communications, satellite antennas attract growing interest, especially high-throughput SatCom-on-the-move antennas for providing high-speed connectivity in a mobile environment. While conventional antennas, such as parabolic dishes and planar waveguide arrays, enjoy a growing market, new kinds of antennas keep emerging to meet diversified requirements in various satellite communication scenarios. This paper first introduces the design requirements, categories, and evolution of SatCom-on-the-move antennas, and then discusses representative designs of mechanical scanning antennas and electronic scanning antennas, including their structures, principles, characteristics, and limitations in practical applications. Given the new requirements of satellite communications, this paper also highlights some of the latest progress in this field, including the Monolithic Microwave Integrated Circuit (MMIC)-based phased array antenna, the metasurface-based phased array antenna, and their hybrid versions. Finally, some critical challenges facing future antennas are discussed. It is believed that concerted efforts from the antenna, microwave, and material communities, among others, are necessary to advance SatCom-on-the-move antennas for the upcoming era of satellite communication.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1760-1768"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139293435","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Game-theoretic private blockchain design in edge computing networks","authors":"Daoqi Han , Yang Liu , Fangwei Zhang , Yueming Lu","doi":"10.1016/j.dcan.2023.12.001","DOIUrl":"10.1016/j.dcan.2023.12.001","url":null,"abstract":"<div><div>Considering the privacy challenges of secure storage and controlled flow, there is an urgent need to realize a decentralized ecosystem of private blockchain for cyberspace. A collaboration dilemma arises when participants are self-interested and lack feedback of complete information. Traditional blockchains have related faults, such as trustlessness, single-factor consensus, and heavyweight distributed ledgers, preventing them from adapting to the heterogeneous and resource-constrained Internet of Things. In this paper, we develop the game-theoretic design of a two-sided rating with complete information feedback to stimulate collaboration in a private blockchain. The design consists of an evolution strategy of the decision-making network and a computing power network for continuously verifiable proofs. We formulate the optimum rating and resource scheduling problems as two-stage iterative games between participants and leaders. We theoretically prove that the Stackelberg equilibrium exists and the group evolution is stable. Then, we propose a multi-stage evolution consensus with feedback on a block-accounting workload for metadata survival. To continuously validate a block, the metadata of the optimum rating, privacy, and proofs are extracted and stored on a lightweight blockchain. Moreover, to increase resource utilization, surplus computing power is scheduled flexibly to enhance security by degrees. Finally, the evaluation results show the validity and efficiency of our model, thereby solving the collaboration dilemma in the private blockchain.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1622-1634"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139878060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaborative learning-based inter-dependent task dispatching and co-location in an integrated edge computing system","authors":"Uchechukwu Awada , Jiankang Zhang , Sheng Chen , Shuangzhi Li , Shouyi Yang","doi":"10.1016/j.dcan.2024.08.002","DOIUrl":"10.1016/j.dcan.2024.08.002","url":null,"abstract":"<div><div>Recently, several edge deployment types, such as on-premise edge clusters, Unmanned Aerial Vehicle (UAV)-attached edge devices, and telecommunication base stations installed with edge clusters, are being deployed to enable faster response times for latency-sensitive tasks. One fundamental problem is where and how to offload and schedule multi-dependent tasks so as to minimize their collective execution time and achieve high resource utilization. Existing approaches naively dispatch tasks to available edge nodes at random, without considering the resource demands of tasks, inter-dependencies of tasks, and edge resource availability. These approaches can result in longer waiting times for tasks due to insufficient resource availability or dependency support, as well as provider lock-in. Therefore, we present <em>EdgeColla</em>, which is based on the integration of edge resources running across multi-edge deployments. <em>EdgeColla</em> leverages <em>learning</em> techniques to intelligently <em>dispatch</em> multi-dependent tasks, and a variant bin-packing optimization method to <em>co-locate</em> these tasks firmly on available nodes to utilize them optimally. Extensive experiments on real-world task-dependency datasets from Alibaba show that our approach achieves better performance than the baseline schemes.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1837-1850"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143313063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Short video preloading via domain knowledge assisted deep reinforcement learning","authors":"Yuhong Xie , Yuan Zhang , Tao Lin , Zipeng Pan , Si-Ze Qian , Bo Jiang , Jinyao Yan","doi":"10.1016/j.dcan.2024.01.006","DOIUrl":"10.1016/j.dcan.2024.01.006","url":null,"abstract":"<div><div>Short video applications like TikTok have seen significant growth in recent years. One common behavior of users on these platforms is watching and swiping through videos, which can lead to a significant waste of bandwidth. As such, an important challenge in short video streaming is to design a preloading algorithm that can effectively decide which videos to download, at what bitrate, and when to pause the download, in order to reduce bandwidth waste while improving the Quality of Experience (QoE). However, designing such an algorithm is non-trivial, especially when considering the conflicting objectives of minimizing bandwidth waste and maximizing QoE. In this paper, we propose an end-to-end <strong>D</strong>eep reinforcement learning framework with <strong>A</strong>ction <strong>M</strong>asking called DAM that leverages domain knowledge to learn an optimal policy for short video preloading. To achieve this, we introduce a reward shaping technique to minimize bandwidth waste, and use action masking to make actions more reasonable, reduce playback rebuffering, and accelerate the training process. We have conducted extensive experiments using real-world video datasets and 4G, WiFi, and 5G network traces. Our results show that DAM improves the QoE score by 3.73%-11.28% compared to state-of-the-art algorithms, and achieves an average bandwidth waste of only 10.27%-12.07%, outperforming all baseline methods.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1826-1836"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139637679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"UAV-assisted MEC offloading strategy with peak AOI boundary optimization: A method based on DDQN","authors":"Zhixiong Chen , Jiawei Yang , Zhenyu Zhou","doi":"10.1016/j.dcan.2024.01.003","DOIUrl":"10.1016/j.dcan.2024.01.003","url":null,"abstract":"<div><div>In response to the requirements for large-scale device access and ultra-reliable and low-latency communication in the Power Internet of Things, unmanned aerial vehicle-assisted multi-access edge computing can be used to realize flexible access to power services and update large amounts of information in a timely manner. By considering factors such as machine communication traffic, MAC competition access, and information freshness, this paper develops a cross-layer computing framework in which the peak Age of Information (AoI) provides a statistical delay boundary in the finite blocklength regime. We also propose a deep machine learning-based multi-access edge computing offloading algorithm. First, a traffic arrival model is established in which the time interval follows the Beta distribution, and then a business service model is proposed based on the carrier sense multiple access with collision avoidance algorithm. The peak AoI boundary performance of multiple access is evaluated according to stochastic network calculus theory. Finally, an unmanned aerial vehicle-assisted multi-level offloading model with cache is designed, in which the peak AoI violation probability and energy consumption provide the optimization goals. The optimal offloading strategy is obtained using deep reinforcement learning. Compared with baseline schemes based on non-cooperative game theory with stochastic learning automata and random edge offloading, the proposed algorithm improves the overall performance by approximately 3.52% and 20.73%, respectively, and provides superior deterministic offloading performance by using the peak AoI boundary.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1790-1803"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139827691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Collaborative non-chain DNN inference with multi-device based on layer parallel","authors":"Qiuping Zhang , Sheng Sun , Junjie Luo , Min Liu , Zhongcheng Li , Huan Yang , Yuwei Wang","doi":"10.1016/j.dcan.2023.11.004","DOIUrl":"10.1016/j.dcan.2023.11.004","url":null,"abstract":"<div><div>Various intelligent applications based on non-chain DNN models are widely used in Internet of Things (IoT) scenarios. However, resource-constrained IoT devices usually cannot afford the heavy computation burden and cannot guarantee the strict inference latency requirements of non-chain DNN models. Multi-device collaboration has become a promising paradigm for achieving inference acceleration. However, existing works neglect the possibility of inter-layer parallel execution, which fails to exploit the parallelism of collaborating devices and inevitably prolongs the overall completion latency. Thus, there is an urgent need to address the issue of non-chain DNN inference acceleration with multi-device collaboration based on inter-layer parallelism. Three major challenges to be overcome in this problem are exponential computational complexity, complicated layer dependencies, and intractable execution location selection. To this end, we propose a Topological Sorting Based Bidirectional Search (TSBS) algorithm that can adaptively partition non-chain DNN models and select suitable execution locations at layer granularity. More specifically, the TSBS algorithm consists of a topological sorting subalgorithm to realize parallel execution with low computational complexity under complicated layer parallel constraints, and a bidirectional search subalgorithm to quickly find suitable execution locations for non-parallel layers. Extensive experiments show that the TSBS algorithm significantly outperforms state-of-the-art methods in the completion latency of non-chain DNN inference, achieving a reduction of up to 22.69%.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1748-1759"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139297653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attack-detection and multi-clock source cooperation-based accurate time synchronization for PLC-AIoT in smart parks","authors":"Zhigang Du , Sunxuan Zhang , Zijia Yao , Zhenyu Zhou , Muhammad Tariq","doi":"10.1016/j.dcan.2023.10.005","DOIUrl":"10.1016/j.dcan.2023.10.005","url":null,"abstract":"<div><div>Power Line Communications-Artificial Intelligence of Things (PLC-AIoT) combines the low cost and high coverage of PLC with the learning ability of Artificial Intelligence (AI) to provide data collection and transmission capabilities for PLC-AIoT devices in smart parks. With the development of smart parks, their emerging services require secure and accurate time synchronization of PLC-AIoT devices. However, the impact of attackers on the accuracy of time synchronization cannot be ignored. To solve these problems, we propose a tampering attack-aware Deep Q-Network (DQN)-based time synchronization algorithm. First, we construct an abnormal clock source detection model. Then, the abnormal clock source is detected and excluded by comparing the time synchronization information between the device and the gateway. Finally, the proposed algorithm realizes the joint guarantee of high accuracy and low delay for PLC-AIoT in smart parks by intelligently selecting the multi-clock source cooperation strategy and timing weights. Simulation results show that the proposed algorithm achieves better time synchronization delay and accuracy performance.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1732-1740"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136152251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Digital twin empowered lightweight and efficient blockchain for dynamic internet of vehicles","authors":"Haoye Chai , Supeng Leng , Jianhua He , Ke Zhang","doi":"10.1016/j.dcan.2023.08.004","DOIUrl":"10.1016/j.dcan.2023.08.004","url":null,"abstract":"<div><div>The Internet of Vehicles (IoV) has great potential for Intelligent Transportation Systems (ITS), enabling interactive vehicle applications such as advanced driving and infotainment. It is crucial to ensure reliability during the vehicle-to-vehicle interaction process. Although the emerging blockchain has superiority in handling security-related issues, existing blockchain-based schemes show weakness in highly dynamic IoV. Both the transaction broadcast and consensus processes require multiple rounds of communication throughout the whole network, while the high relative speed between vehicles and the dynamic topology result in intermittent connections that degrade the efficiency of the blockchain. In this paper, we propose a Digital Twin (DT)-enabled blockchain framework for dynamic IoV, which aims to reduce both the communication cost and the operational latency of blockchain. To address the dynamic context, we propose a DT construction strategy that jointly considers DT migration and blockchain computing consumption. Moreover, a communication-efficient Local Perceptual Multi-Agent Deep Deterministic Policy Gradient (LPMA-DDPG) algorithm is designed to execute the DT construction strategy among edge servers in a decentralized manner. The simulation results show that the proposed framework can greatly reduce the communication cost while achieving good security performance. The dynamic DT construction strategy shows superiority in operational latency compared with benchmark strategies. The decentralized LPMA-DDPG algorithm is helpful for implementing the optimal DT construction strategy in practical ITS.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1698-1707"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46508635","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A novel routing method for dynamic control in distributed computing power networks","authors":"Lujie Guo, Fengxian Guo, Mugen Peng","doi":"10.1016/j.dcan.2024.02.006","DOIUrl":"10.1016/j.dcan.2024.02.006","url":null,"abstract":"<div><div>Driven by diverse intelligent applications, computing capability is moving from the central cloud to the edge of the network in the form of small cloud nodes, forming a distributed computing power network. Tasked with both packet transmission and data processing, such a network requires joint optimization of communications and computing. Considering the diverse requirements of applications, we develop a dynamic routing control policy to determine both paths and computing nodes in a distributed computing power network. Different from traditional routing protocols, the proposed policy takes additional computing-related metrics into consideration. Based on multi-attribute decision theory and fuzzy logic theory, we propose two routing selection algorithms, the Fuzzy Logic-Based Routing (FLBR) algorithm and the low-complexity Pairwise Multi-Attribute Decision-Making (<em>l</em>PMADM) algorithm. Simulation results show that the proposed policy achieves better performance in average processing delay, user satisfaction, and load balancing compared with existing works.</div></div>","PeriodicalId":48631,"journal":{"name":"Digital Communications and Networks","volume":"10 6","pages":"Pages 1644-1652"},"PeriodicalIF":7.5,"publicationDate":"2024-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140270502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}