Thong Nguyen, Xinying Wang, Xu Chen, J. Schutt-Ainé
{"title":"A Deep Learning Approach for Volterra Kernel Extraction for Time Domain Simulation of Weakly Nonlinear Circuits","authors":"Thong Nguyen, Xinying Wang, Xu Chen, J. Schutt-Ainé","doi":"10.1109/ECTC.2019.00291","DOIUrl":null,"url":null,"abstract":"Volterra kernels are well known to be the multidimensional extension of the impulse response of a linear time invariant (LTI) system. It can be used to accurately model weakly nonlinear, specifically, polynomial nonlinearity systems. It has been used in the past for white-box model order reduction (MOR) to model frequency-domain performance metric quantities such as distortion in power amplifiers (PA). In this paper, we train a neural network from time-domain response of high-speed link buffers to extract multiple high-order kernels at once. Once the kernels are extracted, they can fully characterize the dynamics of the buffers of interest. Using the kernels, we demonstrate that time-domain response is straight-forward to obtain using super-, or multi-dimensional convolution. Previous work has used a shallow feed-forward neural network to train the system by using Gaussian noise as the identification signal. This is not convenient for the method to be compatible with existing computer-aided design tools. In this work, we directly use a pseudo random bit sequence (PRBS) to train the network. The proposed technique is more challenging because the PRBS has flat regions which have highly rich frequency spectrum and requires longer memory length, but allows the method to be compatible with existing simulation programs. We investigate different topologies including feed-forward neural network and recurrent neural network. Comparisons between training phase, inference phase, convergence are presented using different neural network topologies. The paper presents a numerical example using a 28Gbps data rate PAM4 transceiver to validate the proposed method against traditional simulation methods such as IBIS or SPICE level simulation for comparison in speed and accuracy. Using Volterra kernels promises a novel way to perform accurate nonlinear circuit simulation in the LTI system framework which is already well known and well developed. It can be conveniently incorporated into existing EDA frameworks.","PeriodicalId":6726,"journal":{"name":"2019 IEEE 69th Electronic Components and Technology Conference (ECTC)","volume":"403 1","pages":"1889-1896"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 69th Electronic Components and Technology Conference (ECTC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ECTC.2019.00291","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6
Abstract
Volterra kernels are well known as the multidimensional extension of the impulse response of a linear time-invariant (LTI) system. They can accurately model weakly nonlinear systems, specifically systems with polynomial nonlinearity, and have been used in the past for white-box model order reduction (MOR) to model frequency-domain performance metrics such as distortion in power amplifiers (PAs). In this paper, we train a neural network on the time-domain response of high-speed link buffers to extract multiple high-order kernels at once. Once extracted, the kernels fully characterize the dynamics of the buffers of interest, and we demonstrate that the time-domain response is straightforward to obtain using super- (multi-dimensional) convolution. Previous work trained a shallow feed-forward neural network using Gaussian noise as the identification signal, which makes the method inconvenient to integrate with existing computer-aided design tools. In this work, we instead train the network directly with a pseudo-random bit sequence (PRBS). The proposed technique is more challenging because the PRBS has flat regions and a highly rich frequency spectrum and requires a longer memory length, but it makes the method compatible with existing simulation programs. We investigate different topologies, including feed-forward and recurrent neural networks, and compare their training phase, inference phase, and convergence. The paper presents a numerical example using a 28 Gb/s PAM4 transceiver to validate the proposed method against traditional simulation methods such as IBIS and SPICE-level simulation, comparing both speed and accuracy. Volterra kernels promise a novel way to perform accurate nonlinear circuit simulation within the well-known and well-developed LTI framework and can be conveniently incorporated into existing EDA frameworks.
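For reference (this expansion is standard and is not quoted from the paper), the discrete-time Volterra series underlying the abstract can be written as

y[n] = h_0 + \sum_{m_1=0}^{M-1} h_1[m_1]\, x[n-m_1] + \sum_{m_1=0}^{M-1} \sum_{m_2=0}^{M-1} h_2[m_1, m_2]\, x[n-m_1]\, x[n-m_2] + \cdots

where h_1 is the ordinary (first-order) impulse response, h_2, h_3, ... are the higher-order kernels, and M is the memory length. The second- and higher-order terms are exactly the super- (multi-dimensional) convolutions referred to in the abstract.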
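The sketch below illustrates how a time-domain response could be evaluated from already-extracted kernels by direct multi-dimensional convolution. It is a minimal illustration assuming a second-order truncation; the kernel values, memory length M, and the PRBS-like excitation are placeholders for illustration and are not the authors' implementation.

    import numpy as np

    def volterra_response(x, h1, h2):
        """Truncated second-order Volterra model:
        y[n] = sum_m h1[m] x[n-m] + sum_{m1,m2} h2[m1,m2] x[n-m1] x[n-m2]."""
        M = len(h1)
        N = len(x)
        y = np.zeros(N)
        # Zero-pad so taps before n = 0 read as zero.
        xp = np.concatenate([np.zeros(M - 1), x])
        for n in range(N):
            # M most recent samples, newest first: x[n], x[n-1], ..., x[n-M+1]
            w = xp[n:n + M][::-1]
            # 1-D convolution term plus 2-D (super-) convolution term
            y[n] = h1 @ w + w @ h2 @ w
        return y

    # Illustrative PRBS-like excitation (random bits mapped to +/-1),
    # not the specific PRBS used in the paper.
    rng = np.random.default_rng(0)
    x = 2.0 * rng.integers(0, 2, size=2000) - 1.0

    M = 32                                # assumed memory length
    h1 = np.exp(-np.arange(M) / 8.0)      # placeholder first-order kernel
    h2 = 0.05 * np.outer(h1, h1)          # placeholder symmetric second-order kernel
    y = volterra_response(x, h1, h2)

Because the model is linear in the kernels once the input products are formed, the same structure is what a neural-network-based extractor would have to recover from measured input/output data.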