IEEE Transactions on Computers: Latest Articles

BlockCompass: A Benchmarking Platform for Blockchain Performance
IF 3.6 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-22 | DOI: 10.1109/TC.2024.3404103
Mohammadreza Rasolroveicy; Wejdene Haouari; Marios Fokaefs
Abstract: Blockchain technology has gained momentum due to its immutability and transparency. Several blockchain platforms, each with different consensus protocols, have been proposed, but choosing and configuring such a platform is a non-trivial task. Numerous benchmarking tools have been introduced to test the performance of blockchain solutions, yet these tools are often limited to specific blockchain platforms or require complex configurations. Moreover, they tend to focus on one-off batch evaluation models, which may not be ideal for longer-running instances under continuous workloads. In this work, we present BlockCompass, an all-inclusive blockchain benchmarking tool that can be easily configured and extended. We demonstrate how BlockCompass can evaluate the performance of various blockchain platforms and configurations, including Ethereum Proof-of-Authority, Ethereum Proof-of-Work, Hyperledger Fabric Raft, Hyperledger Sawtooth with Proof-of-Elapsed-Time, Practical Byzantine Fault Tolerance, and Raft consensus algorithms, against workloads that continuously fluctuate over time. We show how continuous transactional workloads may be more appropriate than batch workloads in capturing certain stressful events for the system. Finally, we present the results of a usability study on the convenience and effectiveness of BlockCompass in blockchain benchmarking.
Volume 73, Issue 8, pp. 2111-2122
Citations: 0
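The continuously fluctuating workloads that the abstract contrasts with one-off batch runs can be sketched as a simple time-varying rate schedule. The function name and the sinusoidal shape below are illustrative assumptions for a load generator, not BlockCompass's actual implementation:

```python
import math

def fluctuating_workload(base_tps, amplitude, period_s, duration_s, step_s=1):
    """Return a list of (time, target transactions-per-second) pairs whose
    rate fluctuates continuously over time, unlike a fixed batch workload."""
    schedule = []
    for t in range(0, duration_s, step_s):
        tps = base_tps + amplitude * math.sin(2 * math.pi * t / period_s)
        schedule.append((t, max(0, round(tps))))
    return schedule

# One-minute oscillation between 50 and 150 tx/s over a two-minute run
schedule = fluctuating_workload(base_tps=100, amplitude=50, period_s=60, duration_s=120)
```

A benchmark driver would then submit transactions at each step's target rate, exposing stress points (rate ramps, peaks) that a single batch submission would miss.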
A Machine Learning-Empowered Cache Management Scheme for High-Performance SSDs
IF 3.6 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-22 | DOI: 10.1109/TC.2024.3404064
Hui Sun; Chen Sun; Haoqiang Tong; Yinliang Yue; Xiao Qin
Abstract: NAND Flash-based solid-state drives (SSDs) have gained widespread use in data storage thanks to their exceptional performance and low power consumption, and their computational capability has been elevated to tackle complex algorithms. Inside an SSD, a DRAM cache for frequently accessed requests reduces response time and write amplification (WA), thereby improving SSD performance and lifetime. Existing caching schemes based on temporal locality overlook its variations, which can reduce cache hit rates, while schemes that bolster performance via flash-aware techniques do so at the expense of the hit rate. To address these issues, we propose a random-forest Classifier-empowered Cache scheme named CCache, in which I/O requests are classified into critical, intermediate, and non-critical ones according to their access status. After designing a machine learning model to predict these three types of requests, we implement a trie-level linked list to manage cache placement and replacement. CCache safeguards critical requests for cache service to the greatest extent, while granting the highest priority to evicting data accessed by non-critical requests. CCache, which considers chip state when processing non-critical requests, is implemented in an SSD simulator (SSDSim). CCache outperforms alternative caching schemes, including LRU, CFLRU, LCR, NCache, ML_WP, and CCache_ANN, in terms of response time, WA, erase count, and hit ratio, while its gap to the OPT scheme is marginal. For example, CCache reduces the response time of the competitors by up to 41.9%, with an average of 16.1%, and cuts erase counts by up to 67.4%, with an average of 21.3%. The performance gap between CCache and OPT is merely 2.0%-3.0%.
Volume 73, Issue 8, pp. 2066-2080
Citations: 0
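The core idea of class-aware caching can be sketched in a few lines: classify each request, then evict lowest-class entries first. The threshold rule below is a deliberately simple stand-in for the paper's random-forest classifier, and all names and thresholds are illustrative assumptions:

```python
from collections import OrderedDict

def classify(freq, recency):
    # Stand-in for the random-forest model: label a request from simple
    # access-status features (frequency, steps since last access).
    if freq >= 4 and recency <= 2:
        return "critical"
    if freq >= 2:
        return "intermediate"
    return "non-critical"

class PriorityCache:
    """Evict non-critical entries first, then intermediate, then critical;
    ties are broken by least-recent insertion order."""
    ORDER = {"non-critical": 0, "intermediate": 1, "critical": 2}

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # key -> class label

    def access(self, key, label):
        if key in self.entries:
            self.entries.move_to_end(key)
            self.entries[key] = label
            return True  # hit
        if len(self.entries) >= self.capacity:
            keys = list(self.entries)
            victim = min(keys, key=lambda k: (self.ORDER[self.entries[k]],
                                              keys.index(k)))
            del self.entries[victim]
        self.entries[key] = label
        return False  # miss
```

With capacity 2, inserting a critical and a non-critical entry and then a third request evicts the non-critical one, mirroring CCache's priority on evicting non-critical data.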
DPU-Direct: Unleashing Remote Accelerators via Enhanced RDMA for Disaggregated Datacenters
IF 3.6 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-22 | DOI: 10.1109/TC.2024.3404089
Yunkun Liao; Jingya Wu; Wenyan Lu; Xiaowei Li; Guihai Yan
Abstract: This paper presents DPU-Direct, an accelerator disaggregation system that connects accelerator nodes (ANs) and CPU nodes (CNs) over a standard Remote Direct Memory Access (RDMA) network. DPU-Direct eliminates the latency introduced by the CPU-based network stack and by the PCIe interconnect between network I/O and the accelerator. The system architecture includes a DPU Wrapper hardware architecture, an RDMA-based Accelerator Access Pattern (RAAP), and a CN-side programming model. The DPU Wrapper connects accelerators directly with the RDMA engine, turning ANs into disaggregation-native devices, while the RAAP gives the CN low-latency, high-throughput accelerator semantics based on standard RDMA operations. Our FPGA prototype demonstrates DPU-Direct's efficacy with two proof-of-concept applications, AES encryption and key-value cache, which are computationally intensive and latency-sensitive. DPU-Direct yields a 400x speedup in AES encryption over the CPU baseline and matches the performance of a locally integrated AES accelerator. For the key-value cache, DPU-Direct reduces average end-to-end latency by 1.66x for GETs and 1.30x for SETs over the CPU-RDMA-Polling baseline, and reduces latency jitter by over 10x for both operations.
Volume 73, Issue 8, pp. 2081-2095
Citations: 0
LMChain: An Efficient Load-Migratable Beacon-Based Sharding Blockchain System
IF 3.6 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-22 | DOI: 10.1109/TC.2024.3404057
Dengcheng Hu; Jianrong Wang; Xiulong Liu; Qi Li; Keqiu Li
Abstract: Sharding is an important technology that uses group parallelism to enhance the scalability and performance of blockchains. However, existing solutions rely on historical transactions to reallocate shards, which cannot handle temporary overload and incurs additional overhead during reallocation. To this end, this paper proposes LMChain, an efficient load-migratable beacon-based sharding blockchain system whose primary goal is to eliminate reliance on historical transactions while achieving high performance. Specifically, we redesign the state-maintenance data structure in the Beacon Shard to effectively manage all account states at the shard level, and we propose a load-migratable transaction processing protocol built upon this new data structure. To mitigate read-write conflicts during the selection of migration transactions, we adopt a novel graph-partitioning scheme, and we use a relay-based method to handle cross-shard transactions and resolve inter-shard state read-write conflicts. We implement the LMChain prototype and conduct experiments in a real network environment comprising 17 cloud servers. Experimental results show that, compared with state-of-the-art solutions, LMChain reduces the average waiting latency of overloaded transactions by 30% to 48% across different cases with 16 transaction shards, while improving throughput by 3% to 10%.
Volume 73, Issue 9, pp. 2178-2191
Citations: 0
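The conflict-avoidance constraint in migration selection can be illustrated with a greedy rule: only migrate a transaction if the accounts it touches are not touched by any transaction that stays behind. This is a simplified stand-in for the paper's graph-partitioning scheme, and the data shapes are assumptions:

```python
from collections import Counter

def select_migratable(txs, k):
    """Pick up to k transactions whose account sets are disjoint from all
    other transactions, so migrating them causes no read-write conflicts.
    txs maps transaction id -> list of accounts it reads/writes."""
    # Count how many distinct transactions touch each account
    owners = Counter(a for accounts in txs.values() for a in set(accounts))
    picked = []
    for txid, accounts in txs.items():
        if len(picked) == k:
            break
        if all(owners[a] == 1 for a in set(accounts)):
            picked.append(txid)
    return picked
```

A real partitioner would instead cut an account-conflict graph to maximize migrated load, but the disjointness invariant it preserves is the same.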
ToEx: Accelerating Generation Stage of Transformer-Based Language Models via Token-Adaptive Early Exit
IF 3.6 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-21 | DOI: 10.1109/TC.2024.3404051
Myeonggu Kang; Junyoung Park; Hyein Shin; Jaekang Shin; Lee-Sup Kim
Abstract: Transformer-based language models have recently gained popularity in numerous natural language processing (NLP) applications due to their superior performance over traditional algorithms. These models involve two execution stages: summarization and generation. The generation stage accounts for a significant portion of total execution time because of its auto-regressive property, which necessitates considerable and repetitive off-chip accesses. Our objective is therefore to minimize off-chip accesses during the generation stage to expedite transformer execution. To achieve this goal, we propose a token-adaptive early exit (ToEx) that generates output tokens using fewer decoders, thereby reducing off-chip accesses for loading weight parameters. Although this approach can minimize data communication, it brings two challenges: 1) inaccurate self-attention computation, and 2) significant overhead for the exit decision. To overcome these challenges, we introduce a methodology that enables accurate self-attention by lazily performing computations for previously exited tokens, and we mitigate the exit-decision overhead by incorporating a lightweight output embedding layer. We also present a hardware design to efficiently support the proposed work. Evaluation results demonstrate that our work reduces the number of decoders by 2.6x on average and accordingly achieves a 3.2x average speedup over transformer execution without it.
Volume 73, Issue 9, pp. 2248-2261
Citations: 0
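The control flow of early exit is simple to state: after each decoder layer, a lightweight output head scores the vocabulary, and decoding stops as soon as the top probability clears a confidence threshold. The sketch below assumes toy layer and head functions and a softmax-confidence criterion; it is an illustration of the general early-exit idea, not ToEx's exact decision rule:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def early_exit_generate(hidden, decoders, output_head, threshold=0.9):
    """Run decoder layers one at a time; after each, score the vocabulary
    with a lightweight output head and exit once the top probability
    reaches the threshold. Returns (token id, layers actually used)."""
    probs = []
    for depth, layer in enumerate(decoders, start=1):
        hidden = layer(hidden)
        probs = softmax(output_head(hidden))
        if max(probs) >= threshold:
            return probs.index(max(probs)), depth
    return probs.index(max(probs)), len(decoders)
```

With toy layers that progressively sharpen the logits, an "easy" token exits after a few layers instead of traversing the whole stack, which is exactly where the weight-loading savings come from.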
Relieving Write Disturbance for Phase Change Memory With RESET-Aware Data Encoding
IF 3.6 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-21 | DOI: 10.1109/TC.2024.3398490
Ronglong Wu; Zhirong Shen; Jianqiang Chen; Chengshuo Zheng; Zhiwei Yang; Jiwu Shu
Abstract: The write disturbance (WD) problem is becoming increasingly severe in phase change memory (PCM) due to the continuous scaling down of memory technology. Previous studies have attempted to transform WD-vulnerable data patterns in the new data to alleviate the problem. However, across a wide spectrum of real-world benchmarks, we find that simply transforming WD-vulnerable data patterns does not proportionally reduce (and may even increase) WD errors. To address this issue, we present ResEnc, a RESET-aware data encoding scheme that reduces RESET operations to mitigate WD in both the wordlines and bitlines of PCM. It dynamically establishes a mask word for each block for data encoding and adaptively selects an appropriate encoding granularity based on the diverse write patterns. ResEnc finally reassigns the mask words of unchanged blocks to changed blocks to further reduce WD errors. Extensive experiments show that ResEnc reduces WD errors by 16.8%-87.0%, shortens write latency by 5.6%-39.6%, and saves 7.0%-43.1% of write energy for PCM.
Volume 73, Issue 8, pp. 1939-1952
Citations: 0
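The mask-word idea can be sketched at word granularity: since a RESET (programming a cell to 0) is the disturbance-heavy operation, write either the raw data or the data XORed with a mask, whichever demands fewer RESETs, and record the choice in a flag bit. This is a minimal sketch under those assumptions, not ResEnc's full per-block, granularity-adaptive scheme:

```python
def count_resets(old, new, width=8):
    # A RESET programs a cell to 0: count bits changing from 1 to 0.
    return sum(1 for i in range(width)
               if (old >> i) & 1 and not (new >> i) & 1)

def encode_word(old, data, mask, width=8):
    """Write raw data or data XOR mask, whichever needs fewer RESET
    operations; returns (stored word, flag bit, RESET count). The reader
    recovers the data by XORing with the mask when the flag is set."""
    plain = count_resets(old, data, width)
    masked = count_resets(old, data ^ mask, width)
    if masked < plain:
        return data ^ mask, 1, masked
    return data, 0, plain
```

For example, overwriting an all-ones word with all zeros costs 8 RESETs raw, but 0 when encoded with an all-ones mask, which is the kind of saving the per-block mask words exploit.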
Decentralized Task Offloading in Edge Computing: An Offline-to-Online Reinforcement Learning Approach
IF 3.7 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-19 | DOI: 10.1109/TC.2024.3377912
Hongcai Lin; Lei Yang; Hao Guo; Jiannong Cao
Abstract: Decentralized task offloading among cooperative edge nodes is a promising way to enhance resource utilization and improve users' Quality of Experience (QoE) in edge computing. However, current decentralized methods, such as heuristics and game-theoretic methods, either optimize greedily or depend on rigid assumptions, failing to adapt to the dynamic edge environment. Existing DRL-based approaches train the model in a simulation and then apply it in practical systems; they may perform poorly because of the divergence between the practical system and the simulated environment. Other methods train and deploy the model directly in real-world systems and thus face a cold-start problem, which degrades users' QoE before the model converges. This paper proposes a novel offline-to-online DRL approach (O2O-DRL). It uses heuristic task logs to warm-start the DRL model offline. Because offline and online data have different distributions, fine-tuning online with offline methods would ruin the policy learned offline; to avoid this, we fine-tune the model with on-policy DRL and prevent value overestimation. We evaluate O2O-DRL against other approaches in a simulation and on a Kubernetes-based testbed. The performance results show that O2O-DRL outperforms the other methods and solves the cold-start problem.
Volume 73, Issue 6, pp. 1603-1615
Citations: 0
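The offline warm-start step can be illustrated with the simplest possible form of behavior cloning: from logged (state, action) pairs produced by the heuristic, build a tabular policy that imitates the heuristic's majority choice per state. This is a toy stand-in for training a DRL policy network on the logs; state and action names are hypothetical:

```python
from collections import defaultdict

def warm_start_policy(logs):
    """Behavior-clone a tabular policy from heuristic task logs: for each
    observed state, pick the action the heuristic chose most often. The
    resulting policy serves before online fine-tuning converges, avoiding
    the cold start of a randomly initialized model."""
    counts = defaultdict(lambda: defaultdict(int))
    for state, action in logs:
        counts[state][action] += 1
    return {s: max(acts, key=acts.get) for s, acts in counts.items()}
```

Online fine-tuning would then continue from this policy with an on-policy method, so early users see heuristic-quality decisions rather than random ones.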
HPDK: A Hybrid PM-DRAM Key-Value Store for High I/O Throughput
IF 3.7 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-18 | DOI: 10.1109/TC.2024.3377914
Bihui Liu; Zhenyu Ye; Qiao Hu; Yupeng Hu; Yuchong Hu; Yang Xu; Keqin Li
Abstract: This paper explores the design of an architecture that replaces disk with persistent memory (PM) to achieve the highest I/O throughput in log-structured merge tree (LSM-Tree) based key-value stores (KVS). Most existing LSM-Tree based KVSs use PM as an intermediate or smoothing layer, which fails to fully exploit PM's unique advantages to maximize I/O throughput. However, due to PM's distinct characteristics, such as byte addressability and short erasure time, simply replacing existing storage with PM does not yield optimal I/O performance either, and LSM-Tree based KVSs often suffer from slow reads. To tackle these challenges, this paper presents HPDK, a hybrid PM-DRAM KVS that combines level compression for LSM-Trees in PM with a B+-tree based in-memory search index in DRAM, achieving high write and read throughput. HPDK also employs a key-value separation design and a live-item-rate-based dynamic merge method to reduce the volume of PM writes. We implement and evaluate HPDK on a real PM drive, and our extensive experiments show that HPDK provides 1.25-11.8x higher read throughput and 1.47-36.4x higher write throughput than other state-of-the-art LSM-Tree based approaches.
Volume 73, Issue 6, pp. 1575-1587
Citations: 0
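Key-value separation, which HPDK uses to cut PM write volume, keeps values in an append-only log and only (key, offset, length) entries in the search index. The sketch below stands in a bytearray for the PM log and a dict for the in-DRAM B+-tree, and adds the live-item rate that drives HPDK's dynamic merge; class and method names are illustrative:

```python
class KVSeparatedStore:
    """Key-value separation sketch: values go to an append-only log
    (standing in for PM); the index maps key -> (offset, length) and
    stands in for the in-DRAM B+-tree search index."""

    def __init__(self):
        self.log = bytearray()
        self.index = {}

    def put(self, key, value: bytes):
        off = len(self.log)
        self.log += value           # append-only: old versions become garbage
        self.index[key] = (off, len(value))

    def get(self, key):
        off, ln = self.index[key]
        return bytes(self.log[off:off + ln])

    def live_item_rate(self):
        # Fraction of log bytes still referenced by the index; a merge
        # (garbage collection) would be triggered when this drops too low.
        live = sum(ln for _, ln in self.index.values())
        return live / len(self.log) if self.log else 1.0
```

Overwriting a key leaves its old value as dead bytes in the log, so the live-item rate falls; compacting only when it is low is what keeps PM writes small.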
Big-PERCIVAL: Exploring the Native Use of 64-Bit Posit Arithmetic in Scientific Computing
IF 3.7 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-18 | DOI: 10.1109/TC.2024.3377890
David Mallasén; Alberto A. Del Barrio; Manuel Prieto-Matias
Abstract: The accuracy requirements of many scientific computing workloads lead to the use of double-precision floating-point arithmetic in execution kernels. Emerging real-number representations such as posit arithmetic, however, promise even higher accuracy in such computations. In this work, we explore the native use of 64-bit posits in a series of numerical benchmarks and compare their timing performance, accuracy, and hardware cost to IEEE 754 doubles. We also study the conjugate gradient method for numerically solving systems of linear equations in real-world applications. For this, we extend the PERCIVAL RISC-V core and the Xposit custom RISC-V extension with posit64 and quire operations. Results show that posit64 can achieve up to four orders of magnitude lower mean squared error than doubles, which reduces the number of iterations required for convergence in some iterative solvers. However, leveraging the quire accumulator register can limit the ordering of some operations, such as matrix multiplications, and detailed FPGA and ASIC synthesis results highlight the significant hardware cost of 64-bit posit arithmetic and the quire. Despite this, the large accuracy improvements achieved with the same memory bandwidth suggest that posit arithmetic may provide a viable alternative representation for scientific computing.
Volume 73, Issue 6, pp. 1472-1485 | Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10473215
Citations: 0
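A posit differs from an IEEE float in its run-length-encoded regime field, which tapers precision away from 1.0. The decoder below shows the field layout (sign, regime, es exponent bits, fraction) for small posits; it is an illustrative sketch that skips NaR handling, not the paper's posit64 hardware:

```python
def decode_posit(bits, nbits, es):
    """Decode an unsigned int holding an nbits-wide posit with es exponent
    bits into a float. Sketch: zero is handled, NaR (1000...0) is not."""
    if bits == 0:
        return 0.0
    sign = (bits >> (nbits - 1)) & 1
    if sign:
        bits = (-bits) & ((1 << nbits) - 1)   # two's complement, then decode
    # Bits after the sign, most significant first
    rest = [(bits >> i) & 1 for i in range(nbits - 2, -1, -1)]
    first, run = rest[0], 1
    while run < len(rest) and rest[run] == first:
        run += 1                               # regime: run of identical bits
    k = run - 1 if first == 1 else -run
    body = rest[run + 1:]                      # skip the regime terminator bit
    exp = 0
    for b in body[:es]:
        exp = exp * 2 + b
    exp <<= es - min(es, len(body))            # truncated exponent bits read as 0
    frac_bits = body[es:]
    f = 0.0
    for b in frac_bits:
        f = f * 2 + b
    if frac_bits:
        f /= 1 << len(frac_bits)
    value = (1 + f) * 2.0 ** (k * (1 << es) + exp)
    return -value if sign else value
```

Longer regimes trade fraction bits for range, which is why posits near 1.0 carry more significand bits than a same-width float, the source of the accuracy gains the paper measures.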
A Dynamic Adaptive Framework for Practical Byzantine Fault Tolerance Consensus Protocol in the Internet of Things
IF 3.7 | CAS Q2, Computer Science
IEEE Transactions on Computers | Pub Date: 2024-03-18 | DOI: 10.1109/TC.2024.3377921
Chunpei Li; Wangjie Qiu; Xianxian Li; Chen Liu; Zhiming Zheng
Abstract: Blockchains supported by the Practical Byzantine Fault Tolerance (PBFT) protocol can provide decentralized security and trust mechanisms for the Internet of Things (IoT). However, PBFT was not designed for IoT applications, so adapting it to the dynamic changes of an IoT environment with incomplete information is a challenge that urgently needs to be addressed. To this end, we introduce DA-PBFT, a PBFT dynamic adaptive framework based on a multi-agent architecture. DA-PBFT divides dynamic adaptation into two sub-processes: optimality seeking and optimization decision-making. During optimality seeking, a PBFT optimization model based on deep reinforcement learning generates PBFT optimization strategies for consensus nodes. During optimization decision-making, a PBFT optimization decision consensus mechanism based on the Borda count method ensures consistency of PBFT optimization decisions in an environment characterized by incomplete information. Furthermore, we design a dynamic adaptive incentive mechanism and explore the Nash equilibrium conditions and security aspects of DA-PBFT. Experimental results demonstrate that DA-PBFT achieves consistent PBFT optimization decisions under incomplete information, offering robust and efficient transaction throughput for IoT applications.
Volume 73, Issue 7, pp. 1669-1682
Citations: 0
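The Borda count named in the abstract is a standard rank-aggregation rule: each node submits a preference ranking over candidate optimization strategies, a strategy ranked i-th among n earns n-1-i points, and the highest total wins. A minimal sketch (function and strategy names are illustrative):

```python
def borda_decide(rankings):
    """Aggregate per-node preference rankings with the Borda count.
    rankings: list of per-node lists, best strategy first.
    Returns (winning strategy, score table); ties break alphabetically."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, strategy in enumerate(ranking):
            scores[strategy] = scores.get(strategy, 0) + (n - 1 - i)
    winner = max(sorted(scores), key=lambda s: scores[s])
    return winner, scores
```

Because every node's full ranking contributes, a strategy that is broadly acceptable can beat one that a bare plurality ranks first, which suits decision consensus under incomplete information.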