Scaling Analog Photonic Accelerators for Byte-Size, Integer General Matrix Multiply (GEMM) Kernels
Oluwaseun Adewunmi Alo, Sairam Sri Vatsavai, Ishan Thakkar
arXiv - CS - Performance · 2024-07-08 · https://doi.org/arxiv-2407.06134
Abstract
Deep Neural Networks (DNNs) predominantly rely on General Matrix Multiply (GEMM) kernels, which are often accelerated using specialized hardware architectures. Recently, analog photonic GEMM accelerators have emerged as a promising alternative, offering vastly superior speed and energy efficiency compared to traditional electronic accelerators. However, these photonic accelerators cannot support integer operands wider than 4 bits due to their inherent trade-off between analog dynamic range and parallelism. This is often inadequate for DNN training, as operands at least 8 bits wide are deemed necessary to prevent significant accuracy drops. To address these limitations, we introduce a scalable photonic GEMM accelerator named SPOGA. SPOGA utilizes enhanced features such as analog summation of homodyne optical signals and in-transduction positional weighting of operands. By employing an extended optical-analog dataflow that minimizes the overheads associated with bit-sliced integer arithmetic, SPOGA supports byte-size integer GEMM kernels, achieving significant improvements in throughput, latency, and energy efficiency. Specifically, SPOGA demonstrates up to 14.4$\times$, 2$\times$, and 28.5$\times$ improvements in frames-per-second (FPS), FPS/Watt, and FPS/Watt/mm$^2$ respectively, compared to existing state-of-the-art photonic solutions.
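
For readers unfamiliar with the bit-sliced integer arithmetic the abstract invokes, the following is a minimal NumPy sketch of the general idea: each 8-bit operand is split into 4-bit slices, every slice pair is multiplied as a narrow GEMM (the operation a 4-bit-limited analog photonic core can execute natively), and the partial products are combined with positional weights of $2^{4(i+j)}$. The slice width, function names, and dataflow here are illustrative assumptions, not SPOGA's actual optical-analog implementation.

```python
import numpy as np

SLICE_BITS = 4   # assumed per-slice width a 4-bit-limited photonic core supports
N_SLICES = 2     # an 8-bit operand decomposes into 2 x 4-bit slices

def bit_slices(x, slice_bits=SLICE_BITS, n_slices=N_SLICES):
    """Split unsigned integers into low-to-high slices of slice_bits each."""
    mask = (1 << slice_bits) - 1
    return [(x >> (slice_bits * i)) & mask for i in range(n_slices)]

def sliced_gemm(A, B):
    """8-bit integer GEMM built from 4-bit slice partial products.

    Each slice-pair GEMM is scaled by a positional weight 2^(4*(i+j)) --
    a digital stand-in for the positional weighting the abstract
    describes -- and accumulated into the full-precision result.
    """
    A_slices = bit_slices(A)
    B_slices = bit_slices(B)
    acc = np.zeros((A.shape[0], B.shape[1]), dtype=np.int64)
    for i, As in enumerate(A_slices):
        for j, Bs in enumerate(B_slices):
            # 4-bit x 4-bit GEMM: the kernel the analog core executes natively
            partial = As.astype(np.int64) @ Bs.astype(np.int64)
            acc += partial << (SLICE_BITS * (i + j))
    return acc

# Sanity check against a direct 8-bit GEMM
rng = np.random.default_rng(0)
A = rng.integers(0, 256, size=(4, 8), dtype=np.uint8)
B = rng.integers(0, 256, size=(8, 4), dtype=np.uint8)
assert np.array_equal(sliced_gemm(A, B), A.astype(np.int64) @ B.astype(np.int64))
```

Note that the nested loop issues $N\_SLICES^2$ narrow GEMMs plus weighted accumulations per byte-size GEMM; this per-slice overhead is what the abstract says SPOGA's extended optical-analog dataflow is designed to minimize.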