IEEE Transactions on Computers: Latest Articles

RV-CURE: A RISC-V Capability Architecture for Full Memory Safety
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-21 · DOI: 10.1109/TC.2025.3586029 · Vol. 74, No. 10, pp. 3291-3304
Yonghae Kim; Anurag Kar; Jaewon Lee; Jaekyu Lee; Hyesoon Kim
Abstract: Memory-safety violations remain persistent in the real world. Although the tagged-pointer concept has demonstrated significant practical potential, prior work has shown scalability limitations in both performance and security. In this paper, we revisit the tagged-pointer design based on our observation that a pointer tag, stored in a pointer address, can be associated with security metadata and used as a hash to look up a hash table that stores the associated metadata. To realize this idea as a new tagging-based memory-capability model, we investigate a hardware-software co-design approach. First, we develop a generalized tagging method, data-pointer tagging (DPT), to ensure full memory safety. DPT assigns a 16-bit tag to each memory object and associates that tag with the object's capability metadata. On a memory access, DPT then performs a capability check using the associated metadata and validates the access. Furthermore, we design a RISC-V capability architecture, RV-CURE, that implements hardware extensions for DPT and thus enables robust, efficient capability enforcement. Altogether, we prototype a RISC-V evaluation framework in which we launch FPGA instances running the Linux OS and conduct full-system simulation. Our evaluation shows that RV-CURE imposes 9.5-19.6% runtime overhead on the SPEC 2017 C/C++ workloads while ensuring strong memory safety.
Citations: 0
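The DPT idea described in the abstract, a 16-bit pointer tag indexing a table of per-object capability metadata, can be sketched in software. This is an illustrative model only: the tag position, table layout, and names (`CapabilityTable`, `tag_alloc`, `check`) are assumptions for exposition, not RV-CURE's actual hardware encoding.

```python
# Illustrative software model of data-pointer tagging (DPT): a 16-bit tag
# embedded in the upper bits of a 64-bit pointer indexes a table of
# capability metadata (base, bound). Layout and names are assumptions.

TAG_SHIFT = 48          # assume tags occupy the otherwise-unused upper 16 bits
TAG_MASK = 0xFFFF

class CapabilityTable:
    def __init__(self):
        self.meta = {}                      # tag -> (base, bound)
        self.next_tag = 1

    def tag_alloc(self, base, size):
        """Allocate an object: record (base, bound) and return a tagged pointer."""
        tag = self.next_tag & TAG_MASK
        self.next_tag += 1
        self.meta[tag] = (base, base + size)
        return (tag << TAG_SHIFT) | base

    def check(self, tagged_ptr, access_size=1):
        """Capability check on access: look up metadata by tag, validate bounds."""
        tag = (tagged_ptr >> TAG_SHIFT) & TAG_MASK
        addr = tagged_ptr & ((1 << TAG_SHIFT) - 1)
        if tag not in self.meta:
            raise MemoryError("dangling or untagged pointer")
        base, bound = self.meta[tag]
        if not (base <= addr and addr + access_size <= bound):
            raise MemoryError("out-of-bounds access")
        return addr

    def tag_free(self, tagged_ptr):
        """Free: invalidate the tag so later uses fault (temporal safety)."""
        tag = (tagged_ptr >> TAG_SHIFT) & TAG_MASK
        del self.meta[tag]
```

In this model a spatial violation (out-of-bounds offset) and a temporal violation (use after `tag_free`) both fail the check, mirroring the full memory safety the paper targets; RV-CURE performs the equivalent check in hardware.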
LAShards: Low-Overhead and Self-Adaptive MRC Construction for Non-Stack Algorithms
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-21 · DOI: 10.1109/TC.2025.3590811 · Vol. 74, No. 10, pp. 3490-3503
Sanle Zhao; Yujuan Tan; Zhaoyang Zeng; Jing Yu; Zhuoxin Bai; Ao Ren; Xianzhang Chen; Duo Liu
Abstract: Shared cache systems have become increasingly crucial, especially in cloud services, where the Miss Ratio Curve (MRC) is a widely used tool for evaluating cache performance. The MRC depicts the relationship between the cache miss ratio and cache size, indicating how cache performance trends with varying cache sizes. Recent advancements have enabled efficient MRC construction for stack replacement policies. For non-stack policies, miniature simulation downsizes the actual cache and data stream through spatially hashed sampling, providing a general method for MRC construction. However, this approach still faces significant challenges. First, constructing an MRC requires numerous mini-caches to obtain miss ratios, consuming significant cache resources and incurring tremendous memory and computing overhead. Second, it cannot adapt to dynamic I/O workloads, resulting in a less precise MRC. To address these issues, we propose LAShards, a low-overhead and self-adaptive MRC construction method for non-stack replacement policies. The key idea behind LAShards is to exploit the locality and burstiness in access patterns: it statically reduces memory usage and dynamically adapts to workloads. Compared to previous works, LAShards saves up to 20× memory resources and increases throughput by up to 10×.
Citations: 0
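The spatially hashed sampling that miniature simulation (and, by extension, LAShards' mini-caches) relies on can be sketched as follows: a block is fed to a downsized cache only if its address hashes below a sampling threshold, so the same deterministic subset of the address space is chosen consistently. The hash function and threshold scheme here are illustrative assumptions.

```python
# Sketch of spatially hashed sampling for miniature cache simulation.
import hashlib

def sampled(addr: int, rate: float) -> bool:
    """Spatial hashing: deterministically keep ~`rate` of the address space."""
    h = int.from_bytes(hashlib.blake2b(addr.to_bytes(8, "little"),
                                       digest_size=8).digest(), "little")
    return h < rate * 2**64

def downsample_trace(trace, rate):
    """Feed only sampled addresses to a mini-cache whose size is scaled by `rate`."""
    return [a for a in trace if sampled(a, rate)]
```

Because the decision depends only on the address hash, repeated accesses to the same block are all kept or all dropped, preserving reuse behavior in the downsized trace; LAShards' contribution is reducing how many such mini-caches are needed and adapting them to the workload.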
WOLF: Weight-Level OutLier and Fault Integration for Reliable LLM Deployment
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-17 · DOI: 10.1109/TC.2025.3587957 · Vol. 74, No. 10, pp. 3390-3403
Chong Wang; Wanyi Fu; Jiangwei Zhang; Shiyao Li; Rui Hou; Jian Yang; Yu Wang
Abstract: The rapid advancement of Transformer-based large language models (LLMs) presents significant challenges for their deployment, primarily due to their enormous parameter sizes and intermediate results, which create a memory-capacity bottleneck for effective inference. Compared to traditional DRAM, Non-Volatile Memory (NVM) technologies such as Resistive Random-Access Memory (RRAM) and Phase-Change Memory (PCM) offer higher integration density, making them promising alternatives. However, before NVM can be widely adopted, its reliability issues, particularly manufacturing defects and endurance faults, must be addressed. In response to the limited memory capacity and reliability challenges of deploying LLMs in NVM, we introduce a novel low-overhead weight-level map, named WOLF. WOLF not only integrates the addresses of faulty weights to support efficient fault tolerance but also includes the addresses of outlier weights in LLMs. This allows tensor-wise segmented quantization of both outliers and regular weights, enabling lower-bitwidth quantization. The WOLF framework uses a Bloom-filter-based map to efficiently manage outliers and faults. By employing shared hashes for outliers and faults and specific hashes for faults only, WOLF significantly reduces the area overhead. Building on WOLF, we propose a novel fault-tolerance method that resolves the observed clustering of critical incorrect outliers and fully leverages the inherent resilience of LLMs to improve fault-tolerance capability. As a result, WOLF achieves segment-wise INT4 quantization with enhanced accuracy. Moreover, WOLF adeptly handles Bit Error Rates as high as 1×10^-2 without compromising accuracy, in stark contrast to the state-of-the-art approach, where accuracy declines by more than 20%.
Citations: 0
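The Bloom-filter-based map at the core of WOLF builds on standard Bloom-filter membership: set addresses are recorded in a bit array via several hashes, and a query may false-positive but never false-negative, which is why it can compactly steer weights to outlier vs. regular handling. A minimal sketch follows; the bit-array size and hash count are assumed, not WOLF's parameters.

```python
# Toy Bloom filter: the membership structure WOLF-style maps are built on.
import hashlib

class BloomFilter:
    def __init__(self, bits=1 << 16, hashes=3):
        self.bits, self.hashes = bits, hashes
        self.array = bytearray(bits // 8)

    def _positions(self, key: int):
        # Derive `hashes` independent bit positions from one keyed digest each.
        for i in range(self.hashes):
            digest = hashlib.blake2b(f"{i}:{key}".encode(), digest_size=8).digest()
            yield int.from_bytes(digest, "little") % self.bits

    def add(self, key: int):
        for pos in self._positions(key):
            self.array[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, key: int):
        # May report a false positive, but never a false negative.
        return all(self.array[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))
```

WOLF's refinement, per the abstract, is sharing hash functions between the outlier set and the fault set (plus fault-specific hashes), so one compact structure answers both queries with low area overhead.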
The Case for Secure Miniservers Beyond the Edge
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-16 · DOI: 10.1109/TC.2025.3589691 · Vol. 74, No. 10, pp. 3448-3461
Salonik Resch; Hüsrev Cılasun; Zamshed I. Chowdhury; Masoud Zabihi; Yang Lv; Jian-Ping Wang; Sachin S. Sapatnekar; Ismail Akturk; Ulya R. Karpuzcu
Abstract: Beyond-edge devices can function off the power grid and without batteries, making them suitable for deployment in hard-to-reach environments. As the energy budget is extremely tight, the energy-hungry long-distance communication required for offloading computation or reporting results to a server becomes a significant limitation. Based on the observation that the energy required for communication decreases with shorter distances, this paper makes a case for the deployment of secure beyond-edge miniservers: strategically positioned, lightweight local servers designed to support beyond-edge devices without compromising the privacy of sensitive information. We demonstrate that even for relatively small-scale representative computations, which are more likely to fit into the tight power budget of a beyond-edge device for local processing, deploying a beyond-edge miniserver can lead to higher performance. To this end, we consider representative deployment scenarios of practical importance, including but not limited to agricultural systems and building structures, where beyond-edge miniservers enable highly energy-efficient real-time data processing.
Citations: 0
A Highly Reliable Multiplexing Scheme in Hypercube-Structured Hierarchical Networks
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-16 · DOI: 10.1109/TC.2025.3589732 · Vol. 74, No. 10, pp. 3462-3475
Xuanli Liu; Zhenjiang Dong; Weibei Fan; Mengjie Lv; Xueli Sun; Jin Qi; Sun-Yuan Hsieh
Abstract: The design and optimization of network topologies play a critical role in ensuring the performance and efficiency of high-performance computing (HPC) systems. Traditional topology designs often fall short of the stringent requirements of HPC environments, particularly with respect to fault tolerance, latency, and bandwidth. To address these limitations, we propose a novel class of hierarchical networks, termed Hypercube-Structured Hierarchical Networks (HHNs). This architecture generalizes and extends existing architectures such as half hypercube networks and complete cubic networks, while also introducing previously unexplored hierarchical designs. HHNs exhibit several advantages, particularly in high-performance computing: most notably, their high connectivity enables efficient parallel data processing, and their hierarchical structure supports scalability to accommodate growing computational demands. Furthermore, we present a unicast routing strategy and a broadcast algorithm for HHNs, and design a fault-tolerant algorithm based on the construction of disjoint paths. Experimental evaluations demonstrate that HHNs consistently outperform mainstream architectures in critical performance metrics, including scalability, latency, and robustness to failures.
Citations: 0
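The unicast routing primitive that hypercube-structured networks build on is classic XOR routing: correct one differing address bit per hop, giving a shortest path whose length equals the Hamming distance. A sketch follows; HHN's actual hierarchical routing layers additional structure on top of this.

```python
# Classic n-cube unicast routing: flip each bit where src and dst differ.

def hypercube_route(src: int, dst: int):
    """Return a shortest path (list of node addresses) from src to dst."""
    path, cur = [src], src
    diff = src ^ dst          # set bits mark the dimensions still to traverse
    dim = 0
    while diff:
        if diff & 1:
            cur ^= 1 << dim   # hop across dimension `dim`
            path.append(cur)
        diff >>= 1
        dim += 1
    return path
```

Disjoint-path fault tolerance, as the abstract mentions, then follows from varying the order in which the differing dimensions are corrected: different orders give node-disjoint routes between the same endpoints.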
A Highly Scalable Network Architecture for Optical Data Centers
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-16 · DOI: 10.1109/TC.2025.3589688 · Vol. 74, No. 10, pp. 3433-3447
Weibei Fan; Yao Pan; Fu Xiao; Pinchang Zhang; Lei Han; Sun-Yuan Hsieh
Abstract: Optical Data Center Networks (ODCNs) are high-performance interconnect architectures for parallel and distributed computing, providing higher bandwidth and lower power consumption. However, current optical DCNs struggle to achieve both high scalability and incremental scalability simultaneously. In this paper, we propose the extended Exchanged hyperCube, denoted ExCube, a highly scalable network architecture for optical data centers. Firstly, we detail the addressing scheme and construction method for ExCube, whose flexible scalability modes (exponential, linear, and composite) adapt to diverse scalability requirements. In particular, the diameter of ExCube remains unchanged as its size increases linearly, indicating superior incremental scalability. Secondly, an efficient routing algorithm with linear time complexity is presented to determine the shortest path between any two ToRs in ExCube. Additionally, we propose a per-flow scheduling algorithm based on disjoint paths to enhance the performance of ExCube. The optical devices in ExCube are identical to those in existing optical DCNs, such as WaveCube and OSA, facilitating its construction. Experimental results demonstrate that ExCube outperforms WaveCube in throughput and reduces data transmission time by 5%-35%, while maintaining comparable performance across several critical metrics, including low diameter and link complexity. Compared with advanced networks, ExCube reduces overall cost and energy consumption by 36.7% and 46.5%, respectively.
Citations: 0
AdaptDQC: Adaptive Distributed Quantum Computing With Quantitative Performance Analysis
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-14 · DOI: 10.1109/TC.2025.3586027 · Vol. 74, No. 10, pp. 3277-3290 · Open Access
Debin Xiang; Liqiang Lu; Siwei Tan; Xinghui Jia; Zhe Zhou; Guangyu Sun; Mingshuai Chen; Jianwei Yin
Abstract: We present AdaptDQC, an adaptive compiler framework for optimizing distributed quantum computing (DQC) under diverse performance metrics and inter-chip communication (ICC) architectures. AdaptDQC leverages a novel spatial-temporal graph model to describe quantum circuits, model ICC architectures, and quantify critical performance metrics in DQC systems, yielding a systematic and adaptive approach to constructing circuit-partitioning and chip-mapping strategies that admit hybrid ICC architectures and are optimized against various objectives. Experimental results on a collection of benchmarks show that AdaptDQC outperforms state-of-the-art compiler frameworks: it reduces, on average, the communication cost by up to 35.4% and the latency by up to 38.4%.
Open Access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11080164
Citations: 0
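One concrete cost a DQC partitioner must minimize is inter-chip communication: every two-qubit gate whose operands sit on different chips requires an ICC operation. A deliberately simplified cost model is sketched below, assuming a flat list of two-qubit gates and a static qubit-to-chip assignment rather than AdaptDQC's spatial-temporal graph; the names are illustrative.

```python
# Simplified ICC cost model for a qubit-to-chip assignment.

def icc_cost(circuit, assignment):
    """Count two-qubit gates that cross a chip boundary.

    circuit: list of (q0, q1) pairs, one per two-qubit gate;
    assignment: dict mapping qubit index -> chip id.
    """
    return sum(1 for q0, q1 in circuit if assignment[q0] != assignment[q1])
```

A partitioning strategy then searches over assignments to minimize this count (subject to per-chip capacity); AdaptDQC additionally weights such costs by the ICC architecture and optimizes latency-style objectives on the same graph model.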
GATe: Efficient Graph Attention Network Acceleration With Near-Memory Processing
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-11 · DOI: 10.1109/TC.2025.3588317 · Vol. 74, No. 10, pp. 3419-3432
Shiyan Yi; Yudi Qiu; Guohao Xu; Lingfei Lu; Xiaoyang Zeng; Yibo Fan
Abstract: The Graph Attention Network (GAT) has gained widespread adoption thanks to its exceptional performance in processing non-Euclidean graphs. The critical components of a GAT model are aggregation and attention, which cause numerous main-memory accesses that occupy significant inference time. Recently, much research has proposed near-memory processing (NMP) architectures to accelerate aggregation. However, graph attention requires additional operations distinct from aggregation, making previous NMP architectures, which typically target aggregation-only workloads, less suitable for GAT. In this paper, we propose GATe, a practical and efficient GAT accelerator with an NMP architecture. To the best of our knowledge, this is the first work to accelerate both attention and aggregation computation on DIMMs. We unify feature-vector access to eliminate the two repetitive memory accesses to source nodes caused by the sequential phase-by-phase execution of attention and aggregation. Next, we refine the computation flow to reduce data dependencies in concatenation and softmax, which lowers on-chip memory usage and communication overhead. Additionally, we introduce a novel sharding method that enhances the data reusability of high-degree nodes. Experiments show that GATe speeds up the GAT attention and aggregation phases by up to 6.77× and 2.46× (3.69× and 2.24× on average), respectively, compared to the state-of-the-art NMP works GNNear and GraNDe.
Citations: 0
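The two phases GATe accelerates, per-edge attention scoring followed by softmax-weighted aggregation, can be sketched naively as below. This single-head formulation omits the learned weight matrix (an intentional simplification), and it re-reads source-node features in both phases, which is exactly the redundancy the abstract says GATe's unified feature access eliminates.

```python
# Naive single-head GAT propagation: attention phase, then aggregation phase.
import math

def gat_layer(features, edges, a_src, a_dst):
    """features: node -> feature list; edges: (src, dst) pairs; a_*: attention vectors."""
    def leaky_relu(x, slope=0.2):
        return x if x > 0 else slope * x

    # Phase 1 (attention): score each edge from source and destination features.
    score = {(s, d): leaky_relu(
                 sum(a * f for a, f in zip(a_src, features[s])) +
                 sum(a * f for a, f in zip(a_dst, features[d])))
             for s, d in edges}

    # Phase 2 (aggregation): softmax over each node's in-neighbors, then a
    # weighted sum of source features -- re-reading those features a second time.
    out = {}
    for dst in {d for _, d in edges}:
        incoming = [(s, score[(s, d)]) for s, d in edges if d == dst]
        denom = sum(math.exp(sc) for _, sc in incoming)
        out[dst] = [sum(math.exp(sc) / denom * features[s][k]
                        for s, sc in incoming)
                    for k in range(len(a_src))]
    return out
```

Fusing the two loops so each source feature vector is fetched once is the software analogue of GATe's unified feature-vector access.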
ML-PTA: A Two-Stage ML-Enhanced Framework for Accelerating Nonlinear DC Circuit Simulation With Pseudo-Transient Analysis
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-10 · DOI: 10.1109/TC.2025.3587470 · Vol. 74, No. 10, pp. 3319-3331
Zhou Jin; Wenhao Li; Haojie Pei; Xiaru Zha; Yichao Dong; Xiang Jin; Xiao Wu; Dan Niu; Wei W. Xing
Abstract: Direct current (DC) analysis lies at the heart of integrated circuit design, seeking DC operating points. Although pseudo-transient analysis (PTA) methods are widely used for DC analysis in both industry and academia, their initial parameters and stepping strategy require expert knowledge and labor-intensive tuning to deliver efficient performance, which hinders their further application. In this paper, we leverage the latest advancements in machine learning to deploy PTA with more efficient setups for different problems. More specifically, active learning, which automatically draws knowledge from other circuits, is used to provide suitable initial parameters for the PTA solver, which are then calibrated on the fly to further accelerate the simulation process using TD3-based reinforcement learning (RL). To expedite model convergence, we introduce dual agents and a public sampling buffer in our RL method to enhance sample utilization. To further improve the learning efficiency of the RL agent, we incorporate imitation learning to improve the reward function and introduce supervised learning to provide a better dual-agent rotation strategy. We make the proposed algorithm a general out-of-the-box SPICE-like solver and assess it on a variety of circuits, demonstrating up to a 3.10× reduction in NR iterations for the initial stage and 285.71× for the RL stage.
Citations: 0
Synergistic Memory Optimisations: Precision Tuning in Heterogeneous Memory Hierarchies
IF 3.8 · CAS Q2, Computer Science
IEEE Transactions on Computers · Pub Date: 2025-07-10 · DOI: 10.1109/TC.2025.3586025 · Vol. 74, No. 9, pp. 3168-3180
Gabriele Magnani; Daniele Cattaneo; Lev Denisov; Giuseppe Tagliavini; Giovanni Agosta; Stefano Cherubin
Abstract: Balancing energy efficiency and high performance in embedded systems requires fine-tuning hardware and software components to co-optimize their interaction. In this work, we address the automated optimization of memory usage through a compiler toolchain that leverages DMA-aware precision tuning and mathematical function memorization. The proposed solution extends the LLVM infrastructure, employing the TAFFO plugins for precision tuning, with the SeTHet extension for DMA-aware precision tuning and luTHet for automated, DMA-aware mathematical function memorization. We performed an experimental assessment on HERO, a heterogeneous platform employing RISC-V cores as a parallel accelerator. Our solution enables speedups ranging from 1.5× to 51.1× on AxBench benchmarks that employ trigonometric functions and 4.23×-48.4× on Polybench benchmarks over the baseline HERO platform.
Citations: 0
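Lookup-table-based function memorization of the kind luTHet automates can be sketched as a precomputed sine table read back with linear interpolation: pay the transcendental cost once at build time, then answer runtime calls with two loads and a multiply-add. The table size and interpolation scheme here are assumptions for illustration, not luTHet's implementation.

```python
# Sketch of lookup-table memorization for a trigonometric function.
import math

class SinTable:
    def __init__(self, entries=1024):
        self.entries = entries
        self.step = 2 * math.pi / entries
        # One extra entry so interpolation at the top of the range stays in bounds.
        self.table = [math.sin(i * self.step) for i in range(entries + 1)]

    def sin(self, x: float) -> float:
        """Linear interpolation between the two nearest table entries."""
        x = x % (2 * math.pi)
        i = int(x / self.step)
        frac = x / self.step - i
        return self.table[i] * (1 - frac) + self.table[i + 1] * frac
```

The table size trades memory for accuracy (error shrinks quadratically with the step for linear interpolation), which is why pairing memorization with precision tuning, as the toolchain above does, is a natural fit.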