Large Dimension Parameterization with Convolutional Variational Autoencoder: An Application in the History Matching of Channelized Geological Facies Models
Júlia Potratz, S. A. Canchumuni, José David Bermudez Castro, A. Emerick, M. Pacheco
DOI: 10.1109/ICCSA50381.2020.00016
Published in: 2020 20th International Conference on Computational Science and Its Applications (ICCSA), July 2020
Citations: 1
Abstract
History matching is the problem of assimilating dynamic data into numerical models of oil and gas reservoirs. Among the methods available in the literature, iterative ensemble smoothers are often used in practice. However, these methods assume that all variables are Gaussian, which limits their application to problems where the objective is to update the distribution of rock types (facies) in the model. In fact, updating models of geological facies using dynamic data is still an open issue in the oil industry. The problem hinges on the development of a parameterization able to preserve the geological realism of the models. In this context, parameterization techniques based on deep learning, such as convolutional variational autoencoder (CVAE) networks, have shown promising results in this area when combined with ensemble smoothers. Nevertheless, these types of networks face scalability difficulties for large-sized reservoir models, because the number of network parameters grows rapidly with the input dimension. This work addresses this problem by introducing two new CVAE-based network architectures suitable for parameterizing large-scale reservoir models. The first proposed network incorporates the "depthwise separable convolution" in its design, while the second introduces the "inception module". Results show a considerable reduction in trainable parameters for the first network, while, for the second one, the number becomes invariant to the input dimension.
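To illustrate the parameter savings behind the first architecture, the sketch below compares the trainable-parameter count of a standard convolution against a depthwise separable one (a per-channel spatial convolution followed by a 1x1 pointwise convolution). The channel and kernel sizes are hypothetical placeholders, not values from the paper:

```python
def standard_conv_params(c_in, c_out, k):
    """k x k standard convolution: one k x k x c_in filter per output channel (bias omitted)."""
    return k * k * c_in * c_out

def depthwise_separable_params(c_in, c_out, k):
    """Depthwise k x k convolution (one spatial filter per input channel)
    followed by a 1x1 pointwise convolution that mixes channels."""
    depthwise = k * k * c_in   # spatial filtering, channel by channel
    pointwise = c_in * c_out   # 1x1 cross-channel combination
    return depthwise + pointwise

# Hypothetical layer sizes for illustration only.
c_in, c_out, k = 64, 128, 3
std = standard_conv_params(c_in, c_out, k)          # 73728
sep = depthwise_separable_params(c_in, c_out, k)    # 576 + 8192 = 8768
print(std, sep, round(std / sep, 1))                # roughly an 8.4x reduction
```

The ratio grows with the kernel size and output-channel count, which is why factorized convolutions of this kind help when scaling encoder-decoder networks to large reservoir grids.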