Machine Learning-based Prediction for Phase-Based Dynamic Architectural Specialization
Ruben Vazquez, Islam Badreldin, Mohamad Hammam Alsafrjalani, A. Gordon-Ross
2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pp. 529-534, July 2019. DOI: 10.1109/ISVLSI.2019.00101
Abstract
Embedded computing systems are becoming increasingly complex, now performing tasks that were previously limited to desktop computing systems. However, embedded system designers must still adhere to stringent embedded design constraints (e.g., energy and area requirements) when designing these increasingly complex systems. To meet these constraints, configurable hardware components introduce configurable parameters (e.g., CPU voltage and frequency, cache size, cache associativity, cache line size, pipeline depth/width, etc.) that can be tuned to specific values to meet different design constraints (e.g., area, energy, performance, etc.) and user demands (e.g., increased battery life, increased performance, or a desired trade-off), which translates to a higher-quality user experience. However, determining these specific parameter values becomes increasingly difficult and time-consuming as the configurable parameter design space grows. This issue is further complicated by the fact that each application has a different set of optimal/best parameter values based on these demands and requirements. Furthermore, repetitious application behavior, known as phases, which occurs throughout an application's runtime, can be exploited by tracking each phase's unique optimal parameter values, resulting in a multiplicative, or even exponential, increase in the size of the configuration space. In this paper, we propose a machine learning-based methodology to significantly reduce the time required to find the optimal configurable parameter values for the instruction and data caches for each application phase. In our method, we use artificial neural networks (ANNs) to predict the optimal configuration for application phases. We collect execution statistics for use as features for an application phase and use feature reduction to significantly reduce the feature set size. We show that the ANNs exhibit high, stable accuracy over multiple training and testing iterations. We also show that applications exhibit low energy degradation (less than 1%) for both the instruction and data caches using our methodology.
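The abstract does not specify the exact feature set, feature-reduction technique, network topology, or configuration encoding, so the Python sketch below is only an illustration of the general pipeline under assumed choices: hardware-counter-style execution statistics as features, PCA for feature reduction, and a small scikit-learn MLP standing in for the ANN that maps a phase's statistics to a predicted best cache configuration. All names and sizes are placeholders, not the authors' implementation.

```python
# Hypothetical sketch: predict a per-phase cache configuration from execution
# statistics, with feature reduction before a small ANN classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Each row: execution statistics collected for one application phase
# (e.g., instruction counts, cache accesses/misses, branch statistics).
# Placeholder data: 200 phases x 40 raw counters.
X = rng.random((200, 40))

# Each label: index of the assumed-best cache configuration for that phase,
# e.g. one of 18 (size, associativity, line size) combinations. Placeholder labels.
y = rng.integers(0, 18, size=200)

# Feature reduction (PCA here, as an assumption) followed by a small ANN.
model = make_pipeline(
    StandardScaler(),
    PCA(n_components=8),  # shrink the feature set before training
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
)
model.fit(X, y)

# At runtime, a newly observed phase's statistics are mapped directly to a
# predicted cache configuration, avoiding exhaustive design-space exploration.
new_phase_stats = rng.random((1, 40))
predicted_config = model.predict(new_phase_stats)[0]
print(f"Predicted cache configuration index: {predicted_config}")
```

In this sketch the reduced feature dimensionality (8 components) and the hidden-layer sizes are arbitrary; the point is only that a compact feature vector per phase, rather than the full configuration space, drives the prediction.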