Transformer-Based Deep Learning Models for Well Log Processing and Quality Control by Modelling Global Dependence of the Complex Sequences

Ashutosh Kumar
DOI: 10.2118/208109-ms (https://doi.org/10.2118/208109-ms)
Published: 2021-12-09, in "Day 4 Thu, November 18, 2021"
Citations: 4

Abstract

A single well from any mature field produces approximately 1.7 million Measurement While Drilling (MWD) data points. To diagnose such long sequences of extremely noisy data, we conventionally use either cross-correlation and covariance measurements or Long Short-Term Memory (LSTM) based deep learning algorithms. An LSTM's context size of 200 tokens barely accounts for the entire depth. The proposed work develops an application of a Transformer-based deep learning algorithm to diagnose and predict events in complex sequences of well-log data. Sequential models learn geological patterns and petrophysical trends to detect events across the depths of well-log data. However, vanishing gradients, exploding gradients, and the limits of convolutional filters restrict the diagnosis of ultra-deep wells in complex subsurface settings: the vast number of operations required to detect events between two subsurface points at large separation limits these models. Transformer-based Models (TbMs) rely on non-sequential modelling that uses self-attention to relate information from different positions in the well-log sequence, allowing them to form an end-to-end, non-sequential, parallel memory network. We use approximately 21 million data points from 21 wells of the Volve field for the experiment. Events in time-series well-logs are conventionally modelled with LSTMs, in addition to autoregressive (AR), autoregressive moving average (ARMA), and autoregressive integrated moving average (ARIMA) models. However, the complex global dependencies needed to detect events in a heterogeneous subsurface are challenging for these sequence models. In the presented work we begin with one meter of depth data from Volve, an oil field in the North Sea, and then proceed up to 1000 meters. Initially the LSTM and ARIMA models were acceptable, but as depth increased beyond a few hundred meters their diagnoses started underperforming and a new methodology was required.
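The self-attention operation described above, which lets two depth samples at large separation interact in a single step rather than through a long recurrent chain, can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: for brevity it uses the inputs directly as queries, keys, and values, where a trained Transformer would apply learned projection matrices.

```python
import numpy as np

def self_attention(X):
    """Scaled dot-product self-attention over a well-log sequence.

    X: (seq_len, d) array of log measurements at consecutive depths.
    Each output row is a weighted mix of ALL depths, so two samples
    far apart in depth interact in one operation.
    """
    d = X.shape[1]
    # Illustrative simplification: queries/keys/values are the inputs
    # themselves; a real model uses learned W_q, W_k, W_v projections.
    scores = X @ X.T / np.sqrt(d)                   # (seq_len, seq_len)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # softmax over depths
    return weights @ X                              # attended representation

rng = np.random.default_rng(0)
logs = rng.standard_normal((8, 4))   # 8 depth samples, 4 log channels
out = self_attention(logs)
print(out.shape)                     # (8, 4)
```

Because every depth attends to every other depth directly, the path length between distant samples is constant, which is the property the abstract contrasts with LSTM-style sequential processing.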
TbMs have already outperformed several models in long-sequence modelling for natural language processing tasks, so they are very promising for modelling well-log data with very large depth separation. We scale features and labels according to the maximum and minimum values present in the training dataset and then use a sliding window to obtain training and evaluation data pairs from the well-logs. Additional subsurface features were able to encode some information in the conventional sequential models, but the results did not compare favourably with the TbMs. TbMs achieved a Root Mean Square Error of 0.27 on a scale of (0-1) while diagnosing depths of up to 5000 meters. This is the first paper to show a successful application of Transformer-based deep learning models for well-log diagnosis. The presented model uses a self-attention mechanism to learn complex dependencies and non-linear events from the well-log data. Moreover, the experimental setting discussed in the paper will act as a generalized framework for data from ultra-deep wells and their extremely heterogeneous subsurface environments.
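The preprocessing pipeline described above, min-max scaling against the training split, sliding-window extraction of input/label pairs, and RMSE evaluation on the 0-1 scale, can be sketched as follows. The window length and the synthetic log curve are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def min_max_scale(train, other):
    """Scale both splits with the TRAINING split's min/max only."""
    lo, hi = train.min(axis=0), train.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # guard constant channels
    return (train - lo) / span, (other - lo) / span

def sliding_windows(series, window):
    """Cut a 1-D log into (input window, next-value label) pairs."""
    X = np.stack([series[i:i + window]
                  for i in range(len(series) - window)])
    y = series[window:]
    return X, y

def rmse(pred, target):
    return float(np.sqrt(np.mean((pred - target) ** 2)))

depths = np.linspace(0.0, 10.0, 50)
log = np.sin(depths)                 # toy log curve standing in for MWD data
X, y = sliding_windows(log, window=5)
print(X.shape, y.shape)              # (45, 5) (45,)
```

Scaling with the training split's extrema keeps the evaluation data on the same (0-1) scale without leaking its statistics into training, which is what makes an RMSE of 0.27 on that scale directly interpretable.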