{"title":"Extending the Moore's law by exploring new data center architecture: Invited Paper","authors":"Jian Ouyang, Wei Qi, Yong Wang","doi":"10.1145/2934583.2953981","DOIUrl":null,"url":null,"abstract":"In recent ten years, lots of new applications emerged, such as AI, big data and cloud. Though the workloads of these applications are very diverse, they demand huge resource of data center. In contrast, the silicon technology moves slower and slower because the Moore's law is going to the end. Consequently, the data center building from commodity hardware cannot provide enough cost-efficiency and power-efficiency. To meet the increasingly resource needs of emerging applications, the scale of data center is become much larger and larger. It consumes huge power and cost of hardware. From the business perspective, the slow development of hardware technology limits the value creation of emerging applications. We, Baidu, the largest search engine in China, have faced this challenge in several years ago. We find that the server number increases much faster than the scale of business. And this case is common for internet companies. Because the iteration of general processor becomes slower and slower. For example, Intel announced that the Tick-Tock production strategic was out of date in this early year. This problem drive us to look for new methods to boost business. From Internet Company's perspective, building new chips or new architecture based on its applications' characteristics makes sense. This method can break the limitation of commodity chips and commodity hardware. And according to academic and industry experiences, domain-specified architecture can achieve much better performance and power efficiency than general architecture. Consequently, we are exploring new architecture to extend Moore's law. In this paper, we present the works on exploring new architecture for data center. The data center resource includes storage, memory, computing and networking. Hence, we focus on these four areas. Firstly, we implemented SDF for large-scale distributed storage system. The SDF aims to low cost and high performance flash storage system. Secondly, we implemented SDA for deep learning big data. The SDA is dedicated to solve the computing bottle of emerging applications. The left paper is organized as following. The section 2 is about SDF [1]. The section 3 describes SDA for deep learning [2]. Section 4 presents SDA for big data [3]. And the last section is the conclusion.","PeriodicalId":142716,"journal":{"name":"Proceedings of the 2016 International Symposium on Low Power Electronics and Design","volume":"1650 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2016-08-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2016 International Symposium on Low Power Electronics and Design","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/2934583.2953981","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Over the past ten years, many new applications have emerged, such as AI, big data, and cloud computing. Although the workloads of these applications are very diverse, they all demand huge amounts of data center resources. In contrast, silicon technology is advancing more and more slowly as Moore's law approaches its end. Consequently, data centers built from commodity hardware can no longer provide sufficient cost efficiency and power efficiency. To meet the growing resource needs of emerging applications, data centers have become larger and larger, consuming enormous power and hardware cost. From a business perspective, the slow progress of hardware technology limits the value that emerging applications can create. We, Baidu, the largest search engine in China, faced this challenge several years ago. We found that the number of servers was growing much faster than the scale of the business, a situation common among Internet companies, because general-purpose processors are iterating more and more slowly. For example, early this year Intel announced that its Tick-Tock production strategy was being retired. This problem drove us to look for new ways to support the business. From an Internet company's perspective, it makes sense to build new chips or new architectures based on the characteristics of its own applications. This approach can break through the limitations of commodity chips and commodity hardware, and, according to academic and industry experience, domain-specific architectures can achieve much better performance and power efficiency than general-purpose architectures. Consequently, we are exploring new architectures to extend Moore's law. In this paper, we present our work on exploring new architectures for the data center. Data center resources include storage, memory, computing, and networking, so we focus on these four areas. First, we implemented SDF (software-defined flash) for a large-scale distributed storage system; SDF targets a low-cost, high-performance flash storage system. Second, we implemented SDA (software-defined accelerator) for deep learning and big data; SDA is dedicated to solving the computing bottleneck of emerging applications. The rest of the paper is organized as follows. Section 2 covers SDF [1]. Section 3 describes the SDA for deep learning [2]. Section 4 presents the SDA for big data [3]. The last section concludes the paper.