{"title":"Lessons and Insights from Super-Resolution of Energy Data","authors":"Rithwik Kukunuri, Nipun Batra, Hongning Wang","doi":"10.1145/3371158.3371224","DOIUrl":null,"url":null,"abstract":"Motivation Studies have shown that consumers of electricity can save up 15% of their bills when provided with a detailed appliance wise feedback [1]. Energy super-resolution refers to estimating energy usage at a higher-sampling rate from the lower sampling rate. We mainly focus on predicting the hourly reading of a home, using the daily usage (which can be noted down by the users from the meter). This predicted usage can be used by the consumers to identify the times of the day, which are contributing more to electricity usage and help them optimize their usage. This is analogous to image superresolution, where the zooming out factor equals 24. Problem definition Throughout the paper we will be using the following notation: H Number of homes; D Number of days; X ∈ RH×D Denotes low resolution matrix (Aggregate); Y ∈ RH×D×24 Denotes high resolution matrix; P ∈ RH×D×24 Denotes weights matrix; Weights matrix is same as the matrix which stores the proportion of electricity consumed on a particular day. For the hth home and the dth day, the matrix ∀iPh,d,i = Yh,d,i Xh,d Approach Triplet learning Let L(i, j) denoteX [i, j −K : j +K] , which is a vector of length 2K+1. It stores the K past and K future neighbor aggregate readings in a home i during day j. We can refer to this a neighborhood vector for the ith home for jth day. An embedding network takes 2K+1 dimension vector as input and outputs an vector of dimensionN . The embedding network can be configured with various options such as normalization of output and positive activation of output.Consider (i,x),(j,y),(k, z), where each tuple denotes a home and day pairs. Let V (i,x) denote the embedding vector generated using L(i,x) . We define similarity functions which are specified in Equation(1). 
The functions in Equation(1) denote the similarity of the given tuples in the super-resolution usage. The losses in Table 1 ensure that tuples that are similar in the weights space are also similar in the embedding space. After the embedding network finished training, we generate the embeddings for each of the test samples. Then we find k nearest training samples using the embeddings and use the weights of the closest samples as the weights for the test sample.","PeriodicalId":360747,"journal":{"name":"Proceedings of the 7th ACM IKDD CoDS and 25th COMAD","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-01-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 7th ACM IKDD CoDS and 25th COMAD","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3371158.3371224","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Motivation. Studies have shown that consumers of electricity can save up to 15% of their bills when provided with detailed appliance-wise feedback [1]. Energy super-resolution refers to estimating energy usage at a higher sampling rate from a lower sampling rate. We mainly focus on predicting the hourly readings of a home from its daily usage (which users can note down from the meter). Consumers can use the predicted usage to identify the times of day that contribute most to their electricity consumption and optimize their usage accordingly. This is analogous to image super-resolution with an upscaling factor of 24.

Problem definition. Throughout the paper we use the following notation: H denotes the number of homes; D the number of days; X ∈ R^(H×D) the low-resolution (aggregate) matrix; Y ∈ R^(H×D×24) the high-resolution matrix; and P ∈ R^(H×D×24) the weights matrix, which stores the proportion of electricity consumed in each hour of a particular day. For the h-th home and the d-th day, P_{h,d,i} = Y_{h,d,i} / X_{h,d} for all i.

Approach: triplet learning. Let L(i, j) denote X[i, j−K : j+K], a vector of length 2K+1 that stores the K past and K future neighboring aggregate readings of home i around day j; we refer to this as the neighborhood vector of the i-th home on the j-th day. An embedding network takes this (2K+1)-dimensional vector as input and outputs a vector of dimension N. The embedding network can be configured with various options, such as normalization of the output and a positive activation on the output. Consider (i, x), (j, y), (k, z), where each tuple denotes a home-day pair, and let V(i, x) denote the embedding vector generated from L(i, x). We define similarity functions, specified in Equation (1). The functions in Equation (1) measure the similarity of the given tuples in the super-resolution (weights) space. The losses in Table 1 ensure that tuples that are similar in the weights space are also similar in the embedding space.
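The weights matrix and neighborhood vectors defined above can be sketched in NumPy as follows. This is a minimal illustration with random data, not the paper's implementation; the sizes H, D, K and the inclusive slice X[i, j−K : j+K+1] (so that the vector has length 2K+1) are assumptions for the sketch.

```python
import numpy as np

# Hypothetical small example: H homes, D days, hourly readings (random data).
H, D, K = 3, 10, 2
rng = np.random.default_rng(0)
Y = rng.random((H, D, 24))   # high-resolution matrix Y (hourly readings)
X = Y.sum(axis=2)            # low-resolution aggregate X (daily totals)

# Weights matrix: proportion of each day's usage consumed in each hour,
# P[h, d, i] = Y[h, d, i] / X[h, d], so each day's weights sum to 1.
P = Y / X[:, :, None]

def neighborhood(X, i, j, K):
    """Neighborhood vector L(i, j): the K past and K future daily
    aggregates around day j of home i (length 2K+1, inclusive slice
    assumed here)."""
    return X[i, j - K : j + K + 1]

v = neighborhood(X, 0, K, K)
```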
After the embedding network has finished training, we generate embeddings for each test sample. We then find the k nearest training samples in the embedding space and use the weights of the closest samples as the weights for the test sample.
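The retrieval step above can be sketched as follows. This is an illustrative sketch under assumptions: the trained embedding network is replaced by random embedding vectors, Euclidean distance and a simple mean over the k neighbors' weights are assumed (the paper does not specify these), and the daily aggregate value is made up.

```python
import numpy as np

# Hypothetical setup: random stand-ins for the trained embeddings.
rng = np.random.default_rng(1)
N, n_train, k = 8, 50, 3
train_emb = rng.random((n_train, N))           # embeddings of training samples
train_W = rng.random((n_train, 24))
train_W /= train_W.sum(axis=1, keepdims=True)  # their weight vectors (sum to 1)
test_emb = rng.random(N)                       # embedding of one test sample

# Find the k nearest training samples in embedding space
# (Euclidean distance assumed here).
dists = np.linalg.norm(train_emb - test_emb, axis=1)
nearest = np.argsort(dists)[:k]

# Use the closest samples' weights for the test sample (mean assumed),
# then scale by the day's aggregate to recover predicted hourly readings.
w_hat = train_W[nearest].mean(axis=0)
daily_aggregate = 30.0                         # kWh noted from the meter (made up)
hourly_pred = daily_aggregate * w_hat
```

Because the averaged weights still sum to 1, the predicted hourly readings sum back to the observed daily aggregate, which keeps the prediction consistent with the meter reading.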