Deep neural networks accelerators with focus on tensor processors
Authors: Hamidreza Bolhasani, Mohammad Marandinejad
DOI: 10.1016/j.micpro.2023.105005
Journal: Microprocessors and Microsystems, Volume 105, Article 105005 (published 2023-12-27)
URL: https://www.sciencedirect.com/science/article/pii/S0141933123002508
Citations: 0

Abstract
The massive volume of data produced in the digital age, and the problem of processing it, is one of the era's main challenges, and advances in artificial intelligence and machine learning can help address it. Deep neural networks are an effective way to improve efficiency in both of these areas. Several architectures have been introduced for data processing with deep neural networks, differing from one another in accuracy, efficiency, and computing power. This article systematically reviews these architectures, their features, and their functions. Following a systematic-review methodology, 24 articles (conference and journal papers related to this topic) published between 2014 and 2022 were evaluated. The significant aspects of the selected articles are compared, and the challenges ahead and topics for future research are presented. The results show that the main goals motivating new tensor processor proposals are increasing speed and accuracy, reducing data processing time, reducing on-chip storage requirements, reducing DRAM accesses, lowering energy consumption, and achieving high efficiency.
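The accelerators surveyed here center on dense matrix multiplication, which tensor processors such as Google's TPU implement with a systolic array of multiply-accumulate (MAC) units. As a rough illustration only (not drawn from any of the surveyed papers), the accumulation pattern such hardware performs cycle by cycle can be sketched in plain Python:

```python
def systolic_matmul(a, b):
    """Multiply matrices a (m x k) and b (k x n) by streaming one
    partial-product 'wavefront' per step, the way a MAC array would.
    Weights stay resident in the array, so off-chip (DRAM) traffic is
    limited to the input activations and the final outputs."""
    m, k, n = len(a), len(b), len(b[0])
    c = [[0] * n for _ in range(m)]      # output-stationary accumulators
    for step in range(k):                # one reduction step per cycle
        for i in range(m):
            for j in range(n):
                c[i][j] += a[i][step] * b[step][j]
    return c

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(systolic_matmul(A, B))  # [[19, 22], [43, 50]]
```

The toy loop makes the trade-off in the abstract concrete: because each accumulator `c[i][j]` is updated in place across all `k` steps, intermediate results never leave the array, which is one way the surveyed designs reduce DRAM accesses and energy consumption.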
About the journal:
Microprocessors and Microsystems: Embedded Hardware Design (MICPRO) is a journal covering all design and architectural aspects of embedded systems hardware. This includes embedded hardware platforms ranging from custom hardware through reconfigurable systems and application-specific processors to general-purpose embedded processors. Special emphasis is placed on novel complex embedded architectures, such as systems on chip (SoC), systems on a programmable/reconfigurable chip (SoPC), and multi-processor systems on a chip (MPSoC), as well as their memory and communication methods and structures, such as networks-on-chip (NoC).
Design automation of such systems, including methodologies, techniques, flows, and tools for their design, as well as novel designs of hardware components, falls within the scope of this journal. Novel cyber-physical applications that use embedded systems are also central to this journal. While software is not the journal's main focus, methods of hardware/software co-design, as well as application restructuring and mapping to embedded hardware platforms that consider the interplay between software and hardware components with emphasis on hardware, are also within its scope.