Adiabatic Flip-Flop and SRAM Design for an Adiabatic Reversible Microprocessor
Rene Celis-Cordova, A. Orlov, G. Snider, Tian Lu, J. Kulick
2020 International Conference on Rebooting Computing (ICRC). DOI: 10.1109/ICRC2020.2020.00005
Abstract: Adiabatic reversible computing is a well-developed approach to future energy-efficient computing that reduces heat generation by trading speed for energy savings. By using reversible logic and switching circuits slowly relative to their RC time constants, energy can be recovered and dissipation dramatically reduced. Adiabatic microprocessors contain a large number of sequential elements, such as flip-flops and SRAM cells, that generally do not lend themselves to energy recovery. In this paper we present the design of an adiabatic flip-flop and an adiabatic SRAM cell that perform energy recovery. The adiabatic flip-flop achieves partial energy recovery by combining a reversible master latch with an irreversible follower latch. The adiabatic SRAM cell recovers energy before new data is written by adding select transistors to the cell's power lines. Both sequential elements are designed in 90 nm technology, and simulations show lower energy dissipation than their conventional CMOS counterparts. A 16-bit MIPS reversible microprocessor is presented, demonstrating large-scale integration of both the adiabatic flip-flop and the adiabatic SRAM cell proposed in this work.
{"title":"Tucker-1 Boolean Tensor Factorization with Quantum Annealers","authors":"D. O’Malley, H. Djidjev, B. Alexandrov","doi":"10.1109/ICRC2020.2020.00002","DOIUrl":"https://doi.org/10.1109/ICRC2020.2020.00002","url":null,"abstract":"Quantum annealers are an emerging computational architecture that have the potential to address some challenging computational issues that will be left unresolved as we approach the end of the Moore’s Law era of computing. D-Wave quantum annealers are designed to solve a challenging set of problems – quadratic unconstrained binary optimization problems. This makes them a natural fit for solving problems with binary or Boolean variables. Here, we explore the use of a quantum annealer to solve Boolean tensor factorization. The goal of Boolean tensor factorization is to represent a high-dimensional tensor filled with Boolean values as a product of Boolean matrices and a Boolean core tensor. We show that a particular Boolean tensor factorization problem (called Tucker-1 factorization) can be decomposed into a sequence of quadratic unconstrained binary optimization problems that can be solved with a D-Wave 2000Q quantum annealer. While quantum annealers specifically and quantum computers in general are at a fairly early stage in their development, they are currently capable of solving these Boolean tensor factorization problems. Our results show that for fairly small tensors, we are frequently able to obtain an accurate (sometimes exact) factorization using quantum annealing.","PeriodicalId":320580,"journal":{"name":"2020 International Conference on Rebooting Computing (ICRC)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130136131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[Title page]","authors":"","doi":"10.1109/icrc51095.2020.00001","DOIUrl":"https://doi.org/10.1109/icrc51095.2020.00001","url":null,"abstract":"","PeriodicalId":320580,"journal":{"name":"2020 International Conference on Rebooting Computing (ICRC)","volume":"117 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130884858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"[Copyright notice]","authors":"","doi":"10.1109/icrc51095.2020.00003","DOIUrl":"https://doi.org/10.1109/icrc51095.2020.00003","url":null,"abstract":"","PeriodicalId":320580,"journal":{"name":"2020 International Conference on Rebooting Computing (ICRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131577121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Harnessing adaptive dynamics in neuro-memristive nanowire networks for transfer learning
Ruomin Zhu, Joel Hochstetter, Alon Loeffler, A. Diaz-Alvarez, A. Stieg, J. Gimzewski, T. Nakayama, Z. Kuncic
2020 International Conference on Rebooting Computing (ICRC). DOI: 10.1109/ICRC2020.2020.00007
Abstract: Nanowire networks (NWNs) are a unique hardware platform for neuromorphic information processing. In addition to exhibiting synapse-like resistive switching memory at their cross-point junctions, their self-assembly confers a neural-network-like topology on their electrical circuitry, something impossible to achieve with conventional top-down fabrication. Beyond their low power requirements, cost effectiveness, and efficient interconnects, neuromorphic NWNs are also fault-tolerant and self-healing. These attractive properties can be largely attributed to their complex network connectivity, which enables a rich repertoire of adaptive nonlinear dynamics, including edge-of-chaos criticality. Here, we show how the adaptive dynamics intrinsic to neuromorphic NWNs can be harnessed to achieve transfer learning. We demonstrate this through simulations of a reservoir computing implementation in which NWNs perform the well-known benchmark task of Mackey-Glass (MG) signal forecasting. First, we show that NWNs can predict MG signals with arbitrary degrees of unpredictability (i.e., chaos). We then show that NWNs pre-exposed to an MG signal forecast better than NWNs without prior experience of one. This type of transfer learning is enabled by the network's collective memory of previous states. Overall, their adaptive signal-processing capabilities make neuromorphic NWNs promising candidates for emerging real-time applications, particularly in IoT devices at the far edge.
{"title":"Advanced unembedding techniques for quantum annealers","authors":"Elijah Pelofske, Georg Hahn, H. Djidjev","doi":"10.1109/ICRC2020.2020.00001","DOIUrl":"https://doi.org/10.1109/ICRC2020.2020.00001","url":null,"abstract":"The D-Wave quantum annealers make it possible to obtain high quality solutions of NP-hard problems by mapping a problem in a QUBO (quadratic unconstrained binary optimization) or Ising form to the physical qubit connectivity structure on the D-Wave chip. However, the latter is restricted in that only a fraction of all pairwise couplers between physical qubits exists. Modeling the connectivity structure of a given problem instance thus necessitates the computation of a minor embedding of the variables in the problem specification onto the logical qubits, which consist of several physical qubits \"chained\" together to act as a logical one. After annealing, it is however not guaranteed that all chained qubits get the same value (−1 or +1 for an Ising model, and 0 or 1 for a QUBO), and several approaches exist to assign a final value to each logical qubit (a process called \"unembedding\"). In this work, we present tailored unembedding techniques for four important NP-hard problems: the Maximum Clique, Maximum Cut, Minimum Vertex Cover, and Graph Partitioning problems. Our techniques are simple and yet make use of structural properties of the problem being solved. Using Erdős-Rényi random graphs as inputs, we compare our unembedding techniques to three popular ones (majority vote, random weighting, and minimize energy). We demonstrate that our proposed algorithms outperform the currently available ones in that they yield solutions of better quality, while being computationally equally efficient.","PeriodicalId":320580,"journal":{"name":"2020 International Conference on Rebooting Computing (ICRC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129720571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training Deep Neural Networks with Constrained Learning Parameters
Prasanna Date, C. Carothers, J. Mitchell, J. Hendler, M. Magdon-Ismail
2020 International Conference on Rebooting Computing (ICRC). DOI: 10.1109/ICRC2020.2020.00018
Abstract: Today's deep learning models are trained primarily on CPUs and GPUs. Although these models tend to have low error, they consume high power and use large amounts of memory owing to double-precision floating-point learning parameters. Beyond the Moore's Law era, a significant portion of deep learning tasks will run on edge computing systems, which will form an indispensable part of the overall computing fabric. Training deep learning models for such systems will therefore have to be tailored and adapted to produce models with the following desirable characteristics: low error, low memory footprint, and low power consumption. We believe that deep neural networks (DNNs) whose learning parameters are constrained to a finite set of discrete values, running on neuromorphic computing systems, would be instrumental in building intelligent edge computing systems with these characteristics. To this end, we propose the Combinatorial Neural Network Training Algorithm (CoNNTrA), which leverages a coordinate-gradient-descent-based approach for training deep learning models with finite, discrete learning parameters. We elaborate on the theoretical underpinnings of CoNNTrA and evaluate its computational complexity. As a proof of concept, we use CoNNTrA to train deep learning models with ternary learning parameters on the MNIST, Iris, and ImageNet data sets and compare their performance to the same models trained using backpropagation, using four performance metrics: (i) training error, (ii) validation error, (iii) memory usage, and (iv) training time. Our results indicate that CoNNTrA models use 32× less memory and have errors on par with the backpropagation models.
Reversible Computing with Fast, Fully Static, Fully Adiabatic CMOS
M. Frank, R. Brocato, B. Tierney, N. Missert, Alexander H. Hsia
2020 International Conference on Rebooting Computing (ICRC). DOI: 10.1109/ICRC2020.2020.00014
Abstract: Advancing the energy efficiency of general digital computing far beyond the thermodynamic limits that apply to conventional digital circuits will require the principles of reversible computing. It has been known since the early 1990s that reversible computing based on adiabatic switching is possible in CMOS, although almost all of the "adiabatic" CMOS logic families in the literature are not actually fully adiabatic, which limits their achievable energy savings. The first CMOS logic style to achieve truly, fully adiabatic operation when leakage is negligible (CRL) was not fully static, which led to practical engineering difficulties in the presence of certain nonidealities. Later, "static" adiabatic logic families were described, but they were not actually fully adiabatic, or fully static, and were much slower. In this paper, we describe a new logic family, Static 2-Level Adiabatic Logic (S2LAL), which is, to our knowledge, the first CMOS logic family that is both fully static and truly, fully adiabatic (modulo leakage). In addition, we believe S2LAL is the fastest possible such family among fully pipelined sequential circuits, with a latency per logic stage of one tick (transition time) and a minimum clock period (initiation interval) of 8 ticks. S2LAL requires 8 phases of a trapezoidal power-clock waveform (plus constant power and ground references) to be supplied. We argue that, if implemented in a fabrication process designed to aggressively minimize leakage, S2LAL should demonstrate greater energy efficiency than any other semiconductor-based digital logic family known today.
Cross Entropy Hyperparameter Optimization for Constrained Problem Hamiltonians Applied to QAOA
Christoph Roch, Alexander Impertro, Thomy Phan, Thomas Gabor, Sebastian Feld, Claudia Linnhoff-Popien
2020 International Conference on Rebooting Computing (ICRC). DOI: 10.1109/ICRC2020.2020.00009
Abstract: Hybrid quantum-classical algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) are considered among the most promising approaches for exploiting near-term quantum computers in practical applications. Such algorithms are usually implemented in a variational form, combining a classical optimization method with a quantum machine to find good solutions to an optimization problem. The solution quality of QAOA depends to a high degree on the parameters chosen by the classical optimizer at each iteration. However, the parameter landscape is highly multi-dimensional and contains many low-quality local optima. In this study we apply a Cross-Entropy method to shape this landscape, which allows the classical optimizer to find better parameters more easily and hence improves performance. We empirically demonstrate that this approach reaches significantly better solution quality for the Knapsack Problem.