Evolution of the GPU Device widely used in AI and Massive Parallel Processing
T. Baji
2018 IEEE 2nd Electron Devices Technology and Manufacturing Conference (EDTM), March 2018
DOI: 10.1109/EDTM.2018.8421507
While CPU performance can no longer benefit from Moore's law, the GPU (Graphics Processing Unit) continues to improve its performance by roughly 1.5× per year. For this reason, GPUs are now widely used not only for computer graphics but also for massively parallel processing and AI (Artificial Intelligence). This paper describes the details of this continuous performance growth, the steady evolution of transistor count and die size, and the scalable GPU architecture.
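To put the quoted growth rate in perspective, a minimal sketch of compound scaling follows; it assumes the 1.5×/year figure from the abstract holds as a constant rate, which is an illustrative simplification, not data from the paper:

```python
# Illustrative sketch: cumulative GPU performance under the
# 1.5x/year growth rate quoted above (assumed constant; the
# function name and rate default are ours, not the paper's).

def relative_performance(years: int, rate: float = 1.5) -> float:
    """Performance relative to year 0 after `years` years of compound growth."""
    return rate ** years

# At 1.5x/year, performance compounds to roughly 57x over a decade.
decade_gain = relative_performance(10)
print(f"10-year gain: {decade_gain:.1f}x")
```

At this rate a decade of GPU generations yields close to two orders of magnitude, which is the kind of gap the abstract contrasts against stalled CPU scaling.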