Uncertainty Quantification and Optimization of Deep Learning for Fracture Recognition

R. Santoso, Xupeng He, M. AlSinan, H. Kwak, H. Hoteit
DOI: 10.2118/204863-ms
Published: 2021-12-15, in Day 3 Tue, November 30, 2021
Citations: 1

Abstract

Automatic fracture recognition from borehole images or outcrops enables the construction of fractured-reservoir models. Deep learning for fracture recognition is subject to uncertainty arising from sparse, imbalanced training sets and from random initialization. We present a new workflow for optimizing a deep-learning model under uncertainty using U-Net, considering both the epistemic and aleatoric uncertainty of the model. We propose a U-Net architecture with a dropout layer inserted after every "weighting" layer, and we vary the dropout probability to investigate its impact on the uncertainty response. We build the training set and assign a uniform distribution to each training parameter, such as the number of epochs, batch size, and learning rate. We then perform uncertainty quantification by running the model multiple times for each realization, capturing the aleatoric response. In this approach, which is based on Monte Carlo Dropout, the variance map and F1-scores are used to decide whether additional augmentations are needed or the process can stop. This work demonstrates the existence of uncertainty within deep learning caused by sparse and imbalanced training sets; this issue leads to unstable predictions, and the overall responses are accommodated in the form of aleatoric uncertainty. Our workflow uses the uncertainty response (variance map) as a guide for crafting additional augmentations of the training set. High variance in certain features indicates the need to add new augmented images containing those features, either through affine transformations (rotation, translation, and scaling) or by using similar images. The augmentation improves prediction accuracy, reduces prediction variance, and stabilizes the output. The architecture, number of epochs, batch size, and learning rate are optimized under a fixed-but-uncertain training set. We perform the optimization by searching for the global maximum of accuracy over multiple realizations.
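The Monte Carlo Dropout procedure described above can be sketched in a few lines: dropout is kept active at inference time, the stochastic forward pass is repeated many times, and the per-pixel variance of the predictions forms the variance map. The following is a minimal numpy illustration, not the paper's implementation — the one-weight "model", the dropout probability of 0.2, and the sigmoid fracture score are all toy assumptions standing in for a trained U-Net.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, w, p_drop, rng):
    """One stochastic forward pass: dropout stays active at inference,
    which is the defining trick of Monte Carlo Dropout.
    x: (H, W) image patch; w: scalar stand-in for learned weights."""
    mask = rng.random(x.shape) > p_drop            # Bernoulli keep-mask
    h = (x * mask) / (1.0 - p_drop)                # inverted-dropout scaling
    return 1.0 / (1.0 + np.exp(-(w * h - 0.5)))   # sigmoid "fracture" score

x = rng.random((8, 8))   # toy input patch
w = 2.0                  # toy learned weight

# Repeat the stochastic forward pass; each run sees a different dropout mask.
preds = np.stack([forward(x, w, p_drop=0.2, rng=rng) for _ in range(50)])

mean_map = preds.mean(axis=0)  # averaged prediction
var_map = preds.var(axis=0)    # variance map: high values flag uncertain pixels
```

Pixels with high values in `var_map` are the ones that, in the workflow above, trigger additional augmented images containing the corresponding features.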
Besides the quality of the training set, the learning rate is the dominant factor in the optimization process. The selected learning rate controls the diffusion of information through the model; under imbalanced conditions, overly fast learning rates cause the model to miss the main features. Another challenge in fracture recognition on a real outcrop is the optimal selection of the parental images used to generate the initial training set. We suggest picking images from multiple sides of the outcrop that show significant variations of the features; this avoids long iterations within the workflow. We introduce a new approach to address the uncertainties associated with both the training process and the physical problem. The proposed approach is general in concept and can be applied to a variety of deep-learning problems in geoscience.
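The optimization step — sampling training parameters from their assigned distributions, training one realization per sample, and keeping the realization with the highest accuracy — amounts to a random search. A minimal numpy sketch follows; the scoring function is a purely illustrative stand-in for training and validating the U-Net (it penalizes fast learning rates, mimicking the missed-features failure mode described above), and the log-uniform prior on the learning rate is an assumption, since the abstract only states that each parameter receives a uniform distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

def train_and_score(epochs, batch_size, lr):
    """Illustrative stand-in for training the model and returning
    validation accuracy. epochs and batch_size are accepted but unused
    here; the toy score peaks near lr = 1e-3."""
    return 1.0 / (1.0 + abs(np.log10(lr) + 3.0))

# Draw realizations of the training parameters from their priors.
realizations = [
    dict(epochs=int(rng.uniform(10, 100)),
         batch_size=int(rng.uniform(4, 32)),
         lr=10.0 ** rng.uniform(-5, -1))   # log-uniform lr (assumption)
    for _ in range(20)
]

scores = [train_and_score(**r) for r in realizations]
best = realizations[int(np.argmax(scores))]  # global maximum of accuracy
```

In the actual workflow each call to the scoring function would itself be repeated under Monte Carlo Dropout, so that the selected configuration is the best one given the training-set uncertainty rather than a single lucky run.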