{"title":"结合TTL-FIFO算法的原位缓存","authors":"Gasydech Lergchinnaboot, Peraphon Sophatsathit, Saranya Maneeroj","doi":"10.1109/ICCC47050.2019.9064459","DOIUrl":null,"url":null,"abstract":"This research proposes an algorithmic cache arrangement scheme to efficiently utilize existing hardware that are currently plagued with memory wall problem. The proposed scheme exploits straightforwardness of First-in, First-out (FIFO) scheduling algorithm and in situ placement technique. FIFO allows the proposed scheme a fair caching of processes. In situ replacement economically utilizes spaces by replacing the expired process with a new process in the same memory space without flushing. This combination helps reduce operating overheads, which in turn lower power consumption. The benefits of their simplicity and hardware implementable will accelerate operational speed that eventually closes the gap between processing speed and memory access/retrieval speed, thereby lessens the memory wall problem.","PeriodicalId":6739,"journal":{"name":"2019 IEEE 5th International Conference on Computer and Communications (ICCC)","volume":"41 1","pages":"437-441"},"PeriodicalIF":0.0000,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"3","resultStr":"{\"title\":\"In Situ Caching using Combined TTL-FIFO Algorithm\",\"authors\":\"Gasydech Lergchinnaboot, Peraphon Sophatsathit, Saranya Maneeroj\",\"doi\":\"10.1109/ICCC47050.2019.9064459\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This research proposes an algorithmic cache arrangement scheme to efficiently utilize existing hardware that are currently plagued with memory wall problem. The proposed scheme exploits straightforwardness of First-in, First-out (FIFO) scheduling algorithm and in situ placement technique. FIFO allows the proposed scheme a fair caching of processes. In situ replacement economically utilizes spaces by replacing the expired process with a new process in the same memory space without flushing. This combination helps reduce operating overheads, which in turn lower power consumption. 
The benefits of their simplicity and hardware implementable will accelerate operational speed that eventually closes the gap between processing speed and memory access/retrieval speed, thereby lessens the memory wall problem.\",\"PeriodicalId\":6739,\"journal\":{\"name\":\"2019 IEEE 5th International Conference on Computer and Communications (ICCC)\",\"volume\":\"41 1\",\"pages\":\"437-441\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-12-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"3\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2019 IEEE 5th International Conference on Computer and Communications (ICCC)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCC47050.2019.9064459\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 5th International Conference on Computer and Communications (ICCC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCC47050.2019.9064459","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
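The abstract describes the combined mechanism only at a high level. The following is a minimal sketch, in C, of what a TTL-FIFO cache with in situ replacement might look like under those constraints: entries carry a time-to-live, an expired entry is overwritten in place in its own slot (no cache flush), and the oldest entry is evicted FIFO-style when nothing has expired. All names, the slot count, and the default TTL are assumptions for illustration, not the authors' implementation.

```c
/* Hypothetical TTL-FIFO cache sketch with in situ replacement.
 * Field names, sizes, and the FIFO fallback are assumed, based only on the
 * abstract's description; this is not the paper's actual design. */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define CACHE_SLOTS 4
#define DEFAULT_TTL 5          /* seconds an entry stays valid (assumed) */

typedef struct {
    int    valid;
    int    key;
    char   data[32];
    time_t inserted_at;        /* FIFO order: oldest insertion evicted first */
    int    ttl;                /* time-to-live in seconds */
} cache_entry;

static cache_entry cache[CACHE_SLOTS];

static int expired(const cache_entry *e, time_t now) {
    return e->valid && (now - e->inserted_at) >= e->ttl;
}

/* Insert a new item: prefer an empty or expired slot (in situ replacement,
 * no flush of the rest of the cache); otherwise evict the oldest entry (FIFO). */
static void cache_put(int key, const char *data) {
    time_t now = time(NULL);
    int victim = -1;

    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].valid || expired(&cache[i], now)) {
            victim = i;        /* reuse this slot in place */
            break;
        }
        if (victim < 0 || cache[i].inserted_at < cache[victim].inserted_at)
            victim = i;        /* track oldest entry for FIFO fallback */
    }

    cache[victim].valid = 1;
    cache[victim].key = key;
    strncpy(cache[victim].data, data, sizeof cache[victim].data - 1);
    cache[victim].data[sizeof cache[victim].data - 1] = '\0';
    cache[victim].inserted_at = now;
    cache[victim].ttl = DEFAULT_TTL;
}

int main(void) {
    cache_put(1, "process A");
    cache_put(2, "process B");
    printf("slot 0 holds key %d\n", cache[0].key);
    return 0;
}
```

The point of the in situ step in this sketch is that an expired slot is simply overwritten where it sits, so no separate invalidation or flush pass is needed; the FIFO timestamp comparison only comes into play when every slot is still live.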