Single-flux-quantum cache memory architecture
Koki Ishida, Masamitsu Tanaka, Takatsugu Ono, Koji Inoue
2016 International SoC Design Conference (ISOCC), published 2016-12-27. DOI: 10.1109/ISOCC.2016.7799755
Citations: 7
Abstract
Single-flux-quantum (SFQ) logic is a promising technology for realizing microprocessors that operate at over 100 GHz, owing to its ultra-fast-speed and ultra-low-power nature. Although previous work has demonstrated a prototype SFQ microprocessor, the SFQ-based L1 cache memory has not been well optimized: it suffers from large access latency and strictly limited scalability. This paper proposes a novel SFQ cache architecture that supports fast accesses. A sub-arrayed structure applied to the cache yields better scalability in terms of capacity. Evaluation results show that the proposed cache achieves 1.8X faster access speed.
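To illustrate the general idea of a sub-arrayed cache, the following minimal Python sketch shows how an address could be decoded into a sub-array select, a row index, and a tag. All parameters (line size, number of sub-arrays, rows per sub-array) and the bit-field layout are illustrative assumptions, not values or details taken from the paper.

```python
# Minimal sketch of address decoding for a sub-arrayed, direct-mapped cache.
# Splitting one large array into several smaller sub-arrays shortens the
# word/bit lines on the critical access path, which is the motivation for
# sub-arraying in general. All constants below are assumed for illustration.

LINE_BYTES = 32          # bytes per cache line (assumed)
NUM_SUBARRAYS = 4        # number of sub-arrays the data array is split into (assumed)
LINES_PER_SUBARRAY = 64  # rows in each sub-array (assumed)


def decompose(addr: int):
    """Split a byte address into (offset, sub-array select, row index, tag)."""
    offset = addr % LINE_BYTES
    line = addr // LINE_BYTES
    subarray = line % NUM_SUBARRAYS                      # low line bits pick the sub-array
    row = (line // NUM_SUBARRAYS) % LINES_PER_SUBARRAY   # next bits pick the row
    tag = line // (NUM_SUBARRAYS * LINES_PER_SUBARRAY)   # remaining bits form the tag
    return offset, subarray, row, tag


if __name__ == "__main__":
    for addr in (0x0000, 0x1A40, 0x3F7C):
        offset, sub, row, tag = decompose(addr)
        print(f"addr=0x{addr:05X} -> sub-array={sub}, row={row}, tag=0x{tag:X}, offset={offset}")
```

In such an organization only the selected sub-array needs to be activated on a lookup, which is one common way a sub-arrayed design can both shorten the access path and scale to larger capacities by adding sub-arrays rather than lengthening a single array.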