{"title":"优化并行逻辑仿真的机器学习方法","authors":"S. Meraji, C. Tropper","doi":"10.1109/ICPP.2010.62","DOIUrl":null,"url":null,"abstract":"Parallel discrete event simulation can be applied as a fast and cost effective approach for the gate level simulation of current VLSI circuits. In this paper we combine a dynamic load balancing algorithm and a bounded window algorithm for optimistic gate level simulation. The bounded time window prevents the simulation from being too optimistic and from excessive rollbacks. We utilize a machine learning algorithm (Qlearning) to effect this combination. We introduce two dynamic load-balancing algorithms for balancing the communication and computational load and use two learning agents to combine these algorithms. One learning agent combines the two learning algorithms and learns their corresponding parameters, while the second optimizes the value of the time window. Experimental results show up to a 46% improvement in the simulation time using this combined algorithm for several open source circuits. To the best of our knowledge, this is the first time that Q-learning has been used to optimize an optimistic gate level simulation.","PeriodicalId":180554,"journal":{"name":"2010 39th International Conference on Parallel Processing","volume":"22 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2010-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"A Machine Learning Approach for Optimizing Parallel Logic Simulation\",\"authors\":\"S. Meraji, C. Tropper\",\"doi\":\"10.1109/ICPP.2010.62\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Parallel discrete event simulation can be applied as a fast and cost effective approach for the gate level simulation of current VLSI circuits. In this paper we combine a dynamic load balancing algorithm and a bounded window algorithm for optimistic gate level simulation. The bounded time window prevents the simulation from being too optimistic and from excessive rollbacks. We utilize a machine learning algorithm (Qlearning) to effect this combination. We introduce two dynamic load-balancing algorithms for balancing the communication and computational load and use two learning agents to combine these algorithms. One learning agent combines the two learning algorithms and learns their corresponding parameters, while the second optimizes the value of the time window. Experimental results show up to a 46% improvement in the simulation time using this combined algorithm for several open source circuits. 
To the best of our knowledge, this is the first time that Q-learning has been used to optimize an optimistic gate level simulation.\",\"PeriodicalId\":180554,\"journal\":{\"name\":\"2010 39th International Conference on Parallel Processing\",\"volume\":\"22 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2010-09-13\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2010 39th International Conference on Parallel Processing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICPP.2010.62\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2010 39th International Conference on Parallel Processing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICPP.2010.62","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
A Machine Learning Approach for Optimizing Parallel Logic Simulation
Parallel discrete event simulation can be applied as a fast and cost-effective approach to the gate-level simulation of current VLSI circuits. In this paper we combine a dynamic load-balancing algorithm and a bounded-window algorithm for optimistic gate-level simulation. The bounded time window prevents the simulation from becoming too optimistic and from incurring excessive rollbacks. We utilize a machine learning algorithm (Q-learning) to effect this combination. We introduce two dynamic load-balancing algorithms, one for the communication load and one for the computational load, and use two learning agents to combine them. The first agent combines the two load-balancing algorithms and learns their corresponding parameters, while the second optimizes the size of the time window. Experimental results show up to a 46% improvement in simulation time using the combined algorithm on several open-source circuits. To the best of our knowledge, this is the first time that Q-learning has been used to optimize an optimistic gate-level simulation.
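The abstract gives no implementation details, but the following minimal Python sketch illustrates how a tabular Q-learning agent could tune the size of a bounded time window, in the spirit of the second learning agent described above. The candidate window sizes, the state/action encoding, and the reward shaping (committed events minus a rollback penalty) are assumptions for illustration only, not the authors' actual design.

```python
import random

# Illustrative sketch: a tabular Q-learning agent that adjusts the bounded
# time window of an optimistic simulator. All constants below are assumed
# values for demonstration, not taken from the paper.

WINDOW_SIZES = [50, 100, 200, 400, 800]   # hypothetical candidate window sizes
ACTIONS = [-1, 0, +1]                     # shrink, keep, or grow the window
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration

q_table = {}  # maps (state, action) -> estimated long-run reward

def choose_action(state):
    """Epsilon-greedy selection over the shrink/keep/grow actions."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q_table.get((state, a), 0.0))

def update(state, action, reward, next_state):
    """Standard Q-learning update: Q <- Q + alpha*(r + gamma*max_a' Q' - Q)."""
    best_next = max(q_table.get((next_state, a), 0.0) for a in ACTIONS)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

def tune_window(index, committed_events, rollbacks):
    """One control step per simulation interval: reward committed-event
    throughput, penalize rollbacks (assumed reward shaping)."""
    action = choose_action(index)
    next_index = min(max(index + action, 0), len(WINDOW_SIZES) - 1)
    reward = committed_events - 2.0 * rollbacks
    update(index, action, reward, next_index)
    return next_index, WINDOW_SIZES[next_index]
```

In a full simulator, `tune_window` would be invoked periodically with statistics gathered since the last call, letting the agent converge on a window size that balances optimism against rollback cost; the paper's second agent similarly learns the window value online, while its first agent learns the load-balancing parameters.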