{"title":"Packed SIMD Vectorization of the DRAGON2-CB","authors":"Riadh Ben Abdelhamid, Y. Yamaguchi","doi":"10.1109/MCSoC57363.2022.00023","DOIUrl":null,"url":null,"abstract":"For over a half-century, computer architects have explored micro-architecture, instruction set architecture, and system architecture to offer a significant performance boost out of a computing chip. In the micro-architecture, multi-processing and multi-threading arose as fusing highly parallel processing and the growth of semiconductor manufacturing technology. It has caused a paradigm shift in computing chips and led to the many-core processor age, such as NVIDIA GPUs, Movidius Myriad, PEZY ZettaScaler, and the project Eyeriss based on a reconfigurable accelerator. Wherein packed SIMD (Single Instruction Multiple Data) vectorizations attract attention, especially from ML (machine learning) applications. It can achieve more energy-efficient computing by reducing computing precision, which is enough for ML applications to obtain the results with low-accuracy calculations. In other words, accuracy-flexible computing needs to allow splitting off one N-bit ALU (Arithmetic Logic Unit) or one N-bit FPU (Floating-Point Unit) into multiple $M$-bit units. For example, a double-precision (64-bit operands width) FPU can be split into two single-precision (32-bit operands width) FPUs, or four half-precision (16-bit operands width) FPUs. Consequently, instead of executing one original operation, a packed SIMD vectorization simultaneously enables executing two or four reduced-precision operations. This article proposes a packed SIMD vectorization approach, which considers the Dynamically Reprogrammable Architecture of Gather-scatter Overlay Nodes-Compact Buffering (DRAGON2-CB) many-core overlay architecture. In particular, this article presents a thorough comparative study between packed SIMD using dual single-precision and quad half-precision FPU-only many-core overlays compared to the non-vectorized double-precision version.","PeriodicalId":150801,"journal":{"name":"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2022-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 15th International Symposium on Embedded Multicore/Many-core Systems-on-Chip (MCSoC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/MCSoC57363.2022.00023","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
For over half a century, computer architects have explored micro-architecture, instruction set architecture, and system architecture to extract significant performance gains from a computing chip. At the micro-architecture level, multi-processing and multi-threading arose from the fusion of highly parallel processing with advances in semiconductor manufacturing technology. This caused a paradigm shift in computing chips and ushered in the many-core processor age, exemplified by NVIDIA GPUs, the Movidius Myriad, the PEZY ZettaScaler, and the Eyeriss project based on a reconfigurable accelerator. In this context, packed SIMD (Single Instruction, Multiple Data) vectorization attracts attention, especially from ML (machine learning) applications. It achieves more energy-efficient computing by reducing computing precision, which is sufficient for ML applications that can obtain their results from low-accuracy calculations. In other words, accuracy-flexible computing must allow splitting one N-bit ALU (Arithmetic Logic Unit) or one N-bit FPU (Floating-Point Unit) into multiple M-bit units. For example, a double-precision (64-bit operand width) FPU can be split into two single-precision (32-bit operand width) FPUs or four half-precision (16-bit operand width) FPUs. Consequently, instead of executing one original operation, packed SIMD vectorization enables executing two or four reduced-precision operations simultaneously. This article proposes a packed SIMD vectorization approach for the Dynamically Reprogrammable Architecture of Gather-scatter Overlay Nodes-Compact Buffering (DRAGON2-CB) many-core overlay architecture. In particular, it presents a thorough comparative study of packed SIMD dual single-precision and quad half-precision FPU-only many-core overlays against the non-vectorized double-precision version.
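As a minimal sketch of the splitting idea described in the abstract (a software model, not the DRAGON2-CB hardware itself), the following C fragment treats a 64-bit register as either one double-precision operand or two packed single-precision lanes, and issues two single-precision additions in the slot that would otherwise hold a single double-precision addition. All names here (reg64_t, packed_add_f32x2) are illustrative assumptions, not identifiers from the paper.

```c
#include <stdint.h>
#include <stdio.h>

/* A 64-bit "register" viewed either as one double-precision value
 * or as two packed single-precision lanes. */
typedef union {
    double   f64;     /* one double-precision operand       */
    float    f32[2];  /* two packed single-precision lanes  */
    uint64_t bits;    /* raw 64-bit view                    */
} reg64_t;

/* One packed-SIMD addition: two single-precision adds performed
 * where the non-vectorized datapath would perform one
 * double-precision add. */
static reg64_t packed_add_f32x2(reg64_t a, reg64_t b)
{
    reg64_t r;
    r.f32[0] = a.f32[0] + b.f32[0];
    r.f32[1] = a.f32[1] + b.f32[1];
    return r;
}

int main(void)
{
    reg64_t a = { .f32 = { 1.5f, 2.5f } };
    reg64_t b = { .f32 = { 0.5f, 0.5f } };
    reg64_t r = packed_add_f32x2(a, b);
    printf("%f %f\n", r.f32[0], r.f32[1]); /* prints 2.000000 3.000000 */
    return 0;
}
```

The quad half-precision case is analogous: four 16-bit lanes in the same 64-bit word, yielding four reduced-precision operations per slot instead of two.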