A Deep Learning Approach for Volterra Kernel Extraction for Time Domain Simulation of Weakly Nonlinear Circuits

Thong Nguyen, Xinying Wang, Xu Chen, J. Schutt-Ainé
{"title":"A Deep Learning Approach for Volterra Kernel Extraction for Time Domain Simulation of Weakly Nonlinear Circuits","authors":"Thong Nguyen, Xinying Wang, Xu Chen, J. Schutt-Ainé","doi":"10.1109/ECTC.2019.00291","DOIUrl":null,"url":null,"abstract":"Volterra kernels are well known to be the multidimensional extension of the impulse response of a linear time invariant (LTI) system. It can be used to accurately model weakly nonlinear, specifically, polynomial nonlinearity systems. It has been used in the past for white-box model order reduction (MOR) to model frequency-domain performance metric quantities such as distortion in power amplifiers (PA). In this paper, we train a neural network from time-domain response of high-speed link buffers to extract multiple high-order kernels at once. Once the kernels are extracted, they can fully characterize the dynamics of the buffers of interest. Using the kernels, we demonstrate that time-domain response is straight-forward to obtain using super-, or multi-dimensional convolution. Previous work has used a shallow feed-forward neural network to train the system by using Gaussian noise as the identification signal. This is not convenient for the method to be compatible with existing computer-aided design tools. In this work, we directly use a pseudo random bit sequence (PRBS) to train the network. The proposed technique is more challenging because the PRBS has flat regions which have highly rich frequency spectrum and requires longer memory length, but allows the method to be compatible with existing simulation programs. We investigate different topologies including feed-forward neural network and recurrent neural network. Comparisons between training phase, inference phase, convergence are presented using different neural network topologies. The paper presents a numerical example using a 28Gbps data rate PAM4 transceiver to validate the proposed method against traditional simulation methods such as IBIS or SPICE level simulation for comparison in speed and accuracy. Using Volterra kernels promises a novel way to perform accurate nonlinear circuit simulation in the LTI system framework which is already well known and well developed. It can be conveniently incorporated into existing EDA frameworks.","PeriodicalId":6726,"journal":{"name":"2019 IEEE 69th Electronic Components and Technology Conference (ECTC)","volume":"403 1","pages":"1889-1896"},"PeriodicalIF":0.0000,"publicationDate":"2019-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2019 IEEE 69th Electronic Components and Technology Conference (ECTC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ECTC.2019.00291","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 6

Abstract

Volterra kernels are well known to be the multidimensional extension of the impulse response of a linear time-invariant (LTI) system. They can be used to accurately model weakly nonlinear systems, specifically those with polynomial nonlinearities, and have been used in the past for white-box model order reduction (MOR) to model frequency-domain performance metrics such as distortion in power amplifiers (PAs). In this paper, we train a neural network on the time-domain response of high-speed link buffers to extract multiple high-order kernels at once. Once extracted, the kernels fully characterize the dynamics of the buffers of interest. Using the kernels, we demonstrate that the time-domain response is straightforward to obtain via super- (multi-dimensional) convolution. Previous work used a shallow feed-forward neural network trained with Gaussian noise as the identification signal, which makes the method inconvenient to integrate with existing computer-aided design tools. In this work, we instead train the network directly with a pseudo-random bit sequence (PRBS). This is more challenging because the PRBS contains flat regions with a rich frequency spectrum and requires a longer memory length, but it allows the method to remain compatible with existing simulation programs. We investigate different topologies, including feed-forward and recurrent neural networks, and compare their training phase, inference phase, and convergence behavior. The paper presents a numerical example using a 28 Gb/s PAM4 transceiver to validate the proposed method against traditional approaches such as IBIS models and SPICE-level simulation, comparing both speed and accuracy. Using Volterra kernels offers a novel way to perform accurate nonlinear circuit simulation within the well-known and well-developed LTI system framework, and it can be conveniently incorporated into existing EDA frameworks.
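As a minimal sketch of the model class the abstract describes, a discrete-time Volterra series truncated at third order writes the output as nested convolutions of the input with kernels h1, h2, h3; the actual kernel orders and memory length M used in the paper are not reproduced here, so the expression below is illustrative only:

```latex
y[n] = h_0
     + \sum_{m_1=0}^{M-1} h_1[m_1]\, x[n-m_1]
     + \sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1} h_2[m_1,m_2]\, x[n-m_1]\, x[n-m_2]
     + \sum_{m_1=0}^{M-1}\sum_{m_2=0}^{M-1}\sum_{m_3=0}^{M-1} h_3[m_1,m_2,m_3]\, x[n-m_1]\, x[n-m_2]\, x[n-m_3]
```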
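To make the super-convolution step concrete, the sketch below evaluates a first- plus second-order Volterra model on a ±1 bit sequence with plain NumPy. The function name `volterra_response`, the kernel shapes, and the toy kernels are assumptions for illustration, not the authors' implementation or the kernels extracted in the paper.

```python
import numpy as np

def volterra_response(x, h1, h2):
    """Time-domain output of a 2nd-order Volterra model (illustrative sketch).

    x  : (N,)   input samples, e.g. a PRBS-driven buffer input
    h1 : (M,)   first-order kernel (ordinary impulse response)
    h2 : (M, M) second-order kernel
    """
    N, M = len(x), len(h1)
    y = np.zeros(N)
    for n in range(N):
        # Lagged input vector x[n], x[n-1], ..., x[n-M+1], zero-padded before t = 0.
        lags = np.array([x[n - m] if n - m >= 0 else 0.0 for m in range(M)])
        # 1-D convolution term plus the 2-D "super-convolution" term.
        y[n] = h1 @ lags + lags @ h2 @ lags
    return y

# Usage with a PRBS-like +/-1 sequence and small toy kernels.
rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=256)
h1 = np.exp(-np.arange(8) / 2.0)        # decaying linear kernel
h2 = 0.05 * np.outer(h1, h1)            # weak, symmetric 2nd-order kernel
y = volterra_response(x, h1, h2)
```

Higher-order terms follow the same pattern with deeper tensor contractions, which is why kernel memory length dominates both the storage and the evaluation cost.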