Predicting the B-H Loops of Power Magnetics with Transformer-based Encoder-Projector-Decoder Neural Network Architecture

Haoran Li, Diego Serrano, Shukai Wang, T. Guillod, Min Luo, Minjie Chen

2023 IEEE Applied Power Electronics Conference and Exposition (APEC), March 19, 2023
DOI: 10.1109/APEC43580.2023.10131497
Citations: 1
Abstract
This paper presents a transformer-based encoder-projector-decoder neural network architecture for modeling power magnetics B-H hysteresis loops. The transformer-based encoder-decoder network architecture maps a flux density excitation waveform (B) into the corresponding magnetic field strength (H) waveform. The predicted B-H loop can be used to estimate the core loss and support magnetics-in-circuit simulations. A projector is added between the transformer encoder and decoder to capture the impact of other inputs such as frequency, temperature, and dc bias. An example transformer neural network is designed, trained, and tested to prove the effectiveness of the proposed architecture.
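The data flow described above (a transformer encoder over the B waveform, a projector that injects frequency, temperature, and dc bias, and a decoder that emits the H waveform) can be sketched at a shape level. This is only an illustration of the wiring, not the paper's implementation: all weights are random stand-ins for trained parameters, the dimensions and the scalar normalization are hypothetical, and the paper's transformer decoder is collapsed here to a linear head for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Single-head scaled dot-product self-attention over the waveform sequence.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return softmax(scores) @ v

# Hypothetical sizes: 128 samples per B waveform, model width d = 16.
T_STEPS, D = 128, 16

# Random weights stand in for trained parameters (illustrative only).
Wq, Wk, Wv = (rng.standard_normal((D, D)) * 0.1 for _ in range(3))
W_in   = rng.standard_normal((1, D)) * 0.1       # embeds scalar B samples
W_proj = rng.standard_normal((D + 3, D)) * 0.1   # projector: latent + [f, T, H_dc]
W_out  = rng.standard_normal((D, 1)) * 0.1       # decoder head -> H samples

def predict_H(b_waveform, freq, temp, h_dc):
    # Encoder: embed the B waveform and mix time steps with self-attention.
    x = b_waveform[:, None] @ W_in               # (T_STEPS, D)
    z = self_attention(x, Wq, Wk, Wv)            # (T_STEPS, D)
    # Projector: concatenate the (pre-normalized) scalar operating-point
    # inputs onto every latent vector, then mix with a small MLP layer.
    scalars = np.array([freq, temp, h_dc])
    z = np.tanh(np.concatenate([z, np.tile(scalars, (T_STEPS, 1))], axis=1) @ W_proj)
    # Decoder (simplified to a linear head): one H sample per time step,
    # so (b_waveform, H) traces out the predicted B-H loop.
    return (z @ W_out).ravel()                   # (T_STEPS,)

b = np.sin(2 * np.pi * np.arange(T_STEPS) / T_STEPS)  # one period of sinusoidal B
h = predict_H(b, freq=0.5, temp=0.25, h_dc=0.0)
print(h.shape)  # (128,)
```

With trained weights, the per-sample H prediction is what enables the downstream uses the abstract names: integrating H dB over one period gives the core loss, and the B-to-H mapping can be evaluated inside a circuit simulator.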