{"title":"深度学习语言建模工作负载:图形处理器的发展方向","authors":"Ali Hadi Zadeh, Zissis Poulos, Andreas Moshovos","doi":"10.1109/IISWC47752.2019.9041972","DOIUrl":null,"url":null,"abstract":"Language Modeling is at the core of many natural language processing tasks. We analyze two such recent models: a Gated Convolutional Network (GCN) with five layers on the Wikitext-2 dataset and a Transformer network with 24 layers on the Google Billion Word dataset. We find that when executed on modern graphics processors, 30% - 40% of the execution time is due to the final adaptive softmax layer. Analytical modeling of the computation and memory demands of the GCN shows that this behavior will persist even if the hidden state is increased - which could be needed to improve accuracy or to support a wider vocabulary. We present variations of the adaptive softmax layer that reduce execution time for the layer by 40% and that scale better with the hidden state.","PeriodicalId":121068,"journal":{"name":"2019 IEEE International Symposium on Workload Characterization (IISWC)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"8","resultStr":"{\"title\":\"Deep Learning Language Modeling Workloads: Where Time Goes on Graphics Processors\",\"authors\":\"Ali Hadi Zadeh, Zissis Poulos, Andreas Moshovos\",\"doi\":\"10.1109/IISWC47752.2019.9041972\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Language Modeling is at the core of many natural language processing tasks. We analyze two such recent models: a Gated Convolutional Network (GCN) with five layers on the Wikitext-2 dataset and a Transformer network with 24 layers on the Google Billion Word dataset. We find that when executed on modern graphics processors, 30% - 40% of the execution time is due to the final adaptive softmax layer. Analytical modeling of the computation and memory demands of the GCN shows that this behavior will persist even if the hidden state is increased - which could be needed to improve accuracy or to support a wider vocabulary. We present variations of the adaptive softmax layer that reduce execution time for the layer by 40% and that scale better with the hidden state.\",\"PeriodicalId\":121068,\"journal\":{\"name\":\"2019 IEEE International Symposium on Workload Characterization (IISWC)\",\"volume\":\"16 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-11-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"8\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE International Symposium on Workload Characterization (IISWC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/IISWC47752.2019.9041972\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE International Symposium on Workload Characterization (IISWC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISWC47752.2019.9041972","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Deep Learning Language Modeling Workloads: Where Time Goes on Graphics Processors
Language Modeling is at the core of many natural language processing tasks. We analyze two such recent models: a Gated Convolutional Network (GCN) with five layers on the Wikitext-2 dataset and a Transformer network with 24 layers on the Google Billion Word dataset. We find that when executed on modern graphics processors, 30% to 40% of the execution time is due to the final adaptive softmax layer. Analytical modeling of the computation and memory demands of the GCN shows that this behavior will persist even if the hidden state is increased, which could be needed to improve accuracy or to support a wider vocabulary. We present variations of the adaptive softmax layer that reduce execution time for the layer by 40% and that scale better with the hidden state.
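For context on the layer the abstract identifies as the bottleneck: an adaptive softmax splits the vocabulary into a frequent "head" cluster and progressively rarer "tail" clusters projected into smaller dimensions, so most tokens are scored with a much smaller matrix multiply than a full softmax over the vocabulary would require. The sketch below is not the authors' implementation or their proposed variations; it only illustrates a typical adaptive softmax output layer using PyTorch's built-in `nn.AdaptiveLogSoftmaxWithLoss`. The hidden size, vocabulary size, and cluster cutoffs are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an adaptive softmax output layer (illustrative only;
# all sizes and cutoffs below are assumptions, not taken from the paper).
import torch
import torch.nn as nn

hidden_size = 1024     # size of the model's hidden state (assumed)
vocab_size = 100_000   # vocabulary size (assumed, for illustration)

# The vocabulary is partitioned at the cutoffs into a head cluster and
# tail clusters; each successive tail cluster uses a projection that is
# div_value times smaller, reducing compute for rare words.
adaptive_softmax = nn.AdaptiveLogSoftmaxWithLoss(
    in_features=hidden_size,
    n_classes=vocab_size,
    cutoffs=[10_000, 40_000],  # cluster boundaries (assumed)
    div_value=4.0,             # tail projections shrink by 4x per cluster
)

# Toy forward pass: a batch of hidden states and their target token ids.
hidden = torch.randn(32, hidden_size)
targets = torch.randint(0, vocab_size, (32,))
output, loss = adaptive_softmax(hidden, targets)
print(loss.item())
```

Because the head projection still scales with the hidden-state size, this final layer can remain a large share of GPU execution time as the hidden state grows, which is the behavior the paper's analytical model examines.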