{"title":"星尘:多芯片系统上大型人工智能的可扩展和可转移工作负载映射","authors":"Wencheng Zou;Feiyun Zhao;Nan Wu","doi":"10.1109/LCA.2025.3580562","DOIUrl":null,"url":null,"abstract":"Workload partitioning and mapping are critical to optimizing performance in multi-chiplet systems. However, existing approaches struggle with scalability in large search spaces and lack transferability across different workloads. To overcome these limitations, we propose <sc>Stardust</small>, a <underline>s</u>calable and <underline>t</u>r<underline>a</u>nsfe<underline>r</u>able workloa<underline>d</u> mapping on m<underline>u</u>lti-chiplet sy<underline>st</u>ems. <sc>Stardust</small> combines learnable graph clustering to downscale computation graphs for efficient partitioning, topology-masked attention to capture structural information, and deep reinforcement learning (DRL) for optimized workload mapping. Evaluations on production-scale AI models show that (1) <sc>Stardust</small>-generated mappings significantly outperform commonly used heuristics in throughput, and (2) fine-tuning a pre-trained <sc>Stardust</small> model improves sample efficiency by up to 15× compared to training from scratch.","PeriodicalId":51248,"journal":{"name":"IEEE Computer Architecture Letters","volume":"24 2","pages":"201-204"},"PeriodicalIF":1.4000,"publicationDate":"2025-06-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Stardust: Scalable and Transferable Workload Mapping for Large AI on Multi-Chiplet Systems\",\"authors\":\"Wencheng Zou;Feiyun Zhao;Nan Wu\",\"doi\":\"10.1109/LCA.2025.3580562\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Workload partitioning and mapping are critical to optimizing performance in multi-chiplet systems. However, existing approaches struggle with scalability in large search spaces and lack transferability across different workloads. 
To overcome these limitations, we propose <sc>Stardust</small>, a <underline>s</u>calable and <underline>t</u>r<underline>a</u>nsfe<underline>r</u>able workloa<underline>d</u> mapping on m<underline>u</u>lti-chiplet sy<underline>st</u>ems. <sc>Stardust</small> combines learnable graph clustering to downscale computation graphs for efficient partitioning, topology-masked attention to capture structural information, and deep reinforcement learning (DRL) for optimized workload mapping. Evaluations on production-scale AI models show that (1) <sc>Stardust</small>-generated mappings significantly outperform commonly used heuristics in throughput, and (2) fine-tuning a pre-trained <sc>Stardust</small> model improves sample efficiency by up to 15× compared to training from scratch.\",\"PeriodicalId\":51248,\"journal\":{\"name\":\"IEEE Computer Architecture Letters\",\"volume\":\"24 2\",\"pages\":\"201-204\"},\"PeriodicalIF\":1.4000,\"publicationDate\":\"2025-06-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"IEEE Computer Architecture Letters\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://ieeexplore.ieee.org/document/11039063/\",\"RegionNum\":3,\"RegionCategory\":\"计算机科学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Computer Architecture Letters","FirstCategoryId":"94","ListUrlMain":"https://ieeexplore.ieee.org/document/11039063/","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"COMPUTER SCIENCE, HARDWARE & ARCHITECTURE","Score":null,"Total":0}
Stardust: Scalable and Transferable Workload Mapping for Large AI on Multi-Chiplet Systems
Workload partitioning and mapping are critical to optimizing performance in multi-chiplet systems. However, existing approaches struggle with scalability in large search spaces and lack transferability across different workloads. To overcome these limitations, we propose Stardust, a scalable and transferable workload-mapping framework for multi-chiplet systems. Stardust combines learnable graph clustering to downscale computation graphs for efficient partitioning, topology-masked attention to capture structural information, and deep reinforcement learning (DRL) for optimized workload mapping. Evaluations on production-scale AI models show that (1) Stardust-generated mappings significantly outperform commonly used heuristics in throughput, and (2) fine-tuning a pre-trained Stardust model improves sample efficiency by up to 15× compared to training from scratch.
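To make the "topology-masked attention" idea concrete: the general technique restricts attention scores so that each node of a computation graph attends only to its structural neighbors. The sketch below is a minimal, illustrative single-head version in NumPy; it is not the authors' implementation, and the identity projections stand in for learned query/key/value weights.

```python
import numpy as np

def topology_masked_attention(x, adj):
    """Single-head self-attention over graph nodes, with scores masked
    by the adjacency matrix (self-loops allowed) so each node attends
    only to its topological neighbors. Illustrative sketch only."""
    n, d = x.shape
    # Identity projections stand in for learned Q/K/V weight matrices.
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)
    mask = (adj + np.eye(n)) > 0                 # neighbors + self
    scores = np.where(mask, scores, -np.inf)     # forbid non-neighbors
    # Numerically stable row-wise softmax over the allowed entries.
    scores -= scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ v

# Toy 3-node chain graph (0 - 1 - 2): node 0 cannot attend to node 2.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.random.default_rng(0).normal(size=(3, 4))
out = topology_masked_attention(x, adj)
```

In a full pipeline along the lines the abstract describes, such masked attention would produce node embeddings of the (clustered) computation graph that a DRL policy consumes when assigning partitions to chiplets; the masking is what injects the graph's structural information into the attention scores.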
Journal Introduction:
IEEE Computer Architecture Letters is a rigorously peer-reviewed forum for publishing early, high-impact results in the areas of uni- and multiprocessor computer systems, computer architecture, microarchitecture, workload characterization, performance evaluation and simulation techniques, and power-aware computing. Submissions are welcomed on any topic in computer architecture, especially but not limited to: microprocessor and multiprocessor systems, microarchitecture and ILP processors, workload characterization, performance evaluation and simulation techniques, compiler-hardware and operating system-hardware interactions, interconnect architectures, memory and cache systems, power and thermal issues at the architecture level, I/O architectures and techniques, independent validation of previously published results, analysis of unsuccessful techniques, domain-specific processor architectures (e.g., embedded, graphics, network, etc.), real-time and high-availability architectures, reconfigurable systems.