An Efficient Deep Bidirectional Transformer Model for Energy Disaggregation
S. Sykiotis, Maria Kaselimi, A. Doulamis, N. Doulamis
2022 30th European Signal Processing Conference (EUSIPCO), 29 August 2022. DOI: 10.23919/eusipco55093.2022.9909768
In this study, we present TransformNILM, a novel Transformer-based model for Non-Intrusive Load Monitoring (NILM). To infer the consumption signal of household appliances, TransformNILM employs Transformer layers, whose attention mechanisms draw global dependencies between input and output sequences. TransformNILM does not require data balancing and operates with minimal dataset pre-processing. Compared to other Transformer-based architectures, TransformNILM introduces an efficient training scheme in which model training consists of unsupervised pre-training followed by supervised fine-tuning, leading to decreased training time and improved predictive performance. Experimental results validate TransformNILM's superiority over several state-of-the-art methods.
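The two technical claims in the abstract, bidirectional self-attention over the aggregate power window and a two-stage unsupervised-then-supervised training scheme, can be made concrete with a short sketch. The following PyTorch code is a minimal illustration, not the authors' implementation: the window length (480), model width (64), masking ratio (0.25), and the masked-reconstruction pre-training objective are all assumptions chosen to mirror BERT-style pre-training as the abstract describes it.

```python
# A minimal sketch (NOT the authors' released code) of a bidirectional
# Transformer encoder for seq2seq NILM, plus the two-stage training idea:
# masked unsupervised pre-training, then supervised fine-tuning.
# Hyperparameters (window=480, d_model=64, mask_ratio=0.25) are illustrative.
import torch
import torch.nn as nn


class NILMTransformer(nn.Module):
    def __init__(self, window=480, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(1, d_model)  # lift each power sample to d_model
        self.pos = nn.Parameter(torch.zeros(1, window, d_model))  # learned positions
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        # Plain (unmasked) self-attention, so every timestep attends to the
        # whole window in both directions -- the "bidirectional" part.
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, 1)  # per-timestep appliance power

    def forward(self, x):  # x: (batch, window) aggregate signal
        h = self.embed(x.unsqueeze(-1)) + self.pos
        return self.head(self.encoder(h)).squeeze(-1)


def pretrain_step(model, agg, optimizer, mask_ratio=0.25):
    """Unsupervised stage: zero out random timesteps of the aggregate signal
    and train the model to reconstruct them (no appliance labels needed)."""
    mask = torch.rand_like(agg) < mask_ratio
    recon = model(agg.masked_fill(mask, 0.0))
    loss = ((recon - agg) ** 2)[mask].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


def finetune_step(model, agg, appliance, optimizer):
    """Supervised stage: regress the target appliance's consumption signal."""
    loss = nn.functional.mse_loss(model(agg), appliance)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = NILMTransformer()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    agg = torch.rand(8, 480)        # dummy aggregate windows
    appliance = torch.rand(8, 480)  # dummy appliance ground truth
    print(pretrain_step(model, agg, opt))             # stage 1: unsupervised
    print(finetune_step(model, agg, appliance, opt))  # stage 2: supervised
```

The rationale for the two-stage scheme, in general terms, is that masked reconstruction lets the encoder learn the structure of the aggregate signal from unlabeled data, so the supervised stage only has to adapt an already-informative representation to one appliance, which is consistent with the abstract's claim of reduced training time and improved predictive performance.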