{"title":"MAG-Vision: A Vision Transformer Backbone for Magnetic Material Modeling","authors":"Rui Zhang;Lei Shen","doi":"10.1109/TMAG.2025.3527486","DOIUrl":null,"url":null,"abstract":"The neural network-based method for modeling magnetic materials enables the estimation of hysteresis B-H loop and core loss across a wide operation range. Transformers are neural networks widely used in sequence-to-sequence tasks. The classical Transformer modeling method suffers from high per-layer complexity and long recurrent inference time when dealing with long sequences. While down-sampling methods can mitigate these issues, they often sacrifice modeling accuracy. In this study, we propose MAG-Vision, which employs a vision Transformer (ViT) as the backbone for magnetic material modeling. It can shorten waveform sequences with minimal loss of information. We trained the network using the open-source magnetic core loss dataset MagNet. Experimental results demonstrate that MAG-Vision performs well in estimating hysteresis B-H loop and magnetic core losses. The average relative error of magnetic core losses for most materials is less than 2%. Experiments are designed to compare MAG-Vision with different network structures to validate its advantages in accuracy, training speed, and inference time.","PeriodicalId":13405,"journal":{"name":"IEEE Transactions on Magnetics","volume":"61 3","pages":"1-6"},"PeriodicalIF":2.1000,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE Transactions on Magnetics","FirstCategoryId":"5","ListUrlMain":"https://ieeexplore.ieee.org/document/10836152/","RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Neural network-based methods for modeling magnetic materials enable the estimation of hysteresis B-H loops and core losses across a wide operating range. Transformers are neural networks widely used in sequence-to-sequence tasks. The classical Transformer-based modeling method suffers from high per-layer complexity and long recurrent inference time when dealing with long sequences. While down-sampling methods can mitigate these issues, they often sacrifice modeling accuracy. In this study, we propose MAG-Vision, which employs a vision Transformer (ViT) as the backbone for magnetic material modeling. It can shorten waveform sequences with minimal loss of information. We trained the network using the open-source magnetic core loss dataset MagNet. Experimental results demonstrate that MAG-Vision performs well in estimating hysteresis B-H loops and magnetic core losses. The average relative error of magnetic core losses for most materials is less than 2%. Experiments are designed to compare MAG-Vision with different network structures to validate its advantages in accuracy, training speed, and inference time.
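The abstract's central idea is that a ViT-style patch embedding can shorten a long excitation waveform before it reaches the Transformer encoder, reducing per-layer attention cost while preserving most of the waveform information. The sketch below, written in PyTorch, illustrates that idea only; it is not the authors' released code, and the patch length, model width, layer counts, and the scalar core-loss head (`core_loss_head`) are illustrative assumptions rather than values taken from the paper.

```python
# Minimal sketch (assumed architecture, not the authors' implementation):
# a ViT-style encoder over a 1-D flux-density waveform.
import torch
import torch.nn as nn

class WaveformViT(nn.Module):
    def __init__(self, seq_len=1024, patch_len=16, d_model=128,
                 n_heads=4, n_layers=4):
        super().__init__()
        assert seq_len % patch_len == 0
        n_patches = seq_len // patch_len
        # Patch embedding: split the waveform into non-overlapping patches and
        # project each patch to a d_model token. This is what shortens the
        # sequence the Transformer layers must attend over.
        self.patch_embed = nn.Conv1d(1, d_model, kernel_size=patch_len,
                                     stride=patch_len)
        self.pos_embed = nn.Parameter(torch.zeros(1, n_patches, d_model))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, dim_feedforward=4 * d_model,
            batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Illustrative scalar regression head for core loss; a B-H loop head
        # would instead decode one output per time step.
        self.core_loss_head = nn.Linear(d_model, 1)

    def forward(self, b_waveform):
        # b_waveform: (batch, seq_len) flux-density samples over one period
        x = self.patch_embed(b_waveform.unsqueeze(1))       # (batch, d_model, n_patches)
        x = x.transpose(1, 2) + self.pos_embed               # (batch, n_patches, d_model)
        x = self.encoder(x)
        return self.core_loss_head(x.mean(dim=1)).squeeze(-1)  # (batch,) predicted loss

model = WaveformViT()
dummy_b = torch.randn(8, 1024)   # batch of 8 single-period waveforms
print(model(dummy_b).shape)      # torch.Size([8])
```

With a patch length of 16, a 1024-sample waveform becomes 64 tokens, so self-attention operates on a sequence 16 times shorter than the raw waveform, which is the accuracy/complexity trade-off the abstract contrasts with plain down-sampling.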
Journal Description:
Science and technology related to the basic physics and engineering of magnetism, magnetic materials, applied magnetics, magnetic devices, and magnetic data storage. The IEEE Transactions on Magnetics publishes scholarly articles of archival value as well as tutorial expositions and critical reviews of classical subjects and topics of current interest.