Automatic Classification of PDC Cutter Damage Using a Single Deep Learning Neural Network Model

Abdulbaset Ali, Harnoor Singh, Daniel Kelly, Donald G. Hender, Alan Clarke, Mohammad Mahdi Ghiasi, Ronald Haynes, Lesley James
{"title":"Automatic Classification of PDC Cutter Damage Using a Single Deep Learning Neural Network Model","authors":"Abdulbaset Ali, Harnoor Singh, Daniel Kelly, Donald G. Hender, Alan Clarke, Mohammad Mahdi Ghiasi, Ronald Haynes, Lesley James","doi":"10.2118/212503-ms","DOIUrl":null,"url":null,"abstract":"\n There is considerable value in automatically quantifying cutter damage from drill bit pictures. Current approaches do not classify cutter damage by type, i.e., broken, chipped, lost, etc. We, therefore, present a computer vision model using deep learning neural networks to automate multi-type damage detection in Polycrystalline Diamond Compact (PDC) drill bit cutters.\n The automated bit damage detection approach presented in this paper is based on training a computer vision model on different cutter damage types aimed at detecting and classifying damaged cutters directly. Prior approaches detected cutters first and then classified the damage type for the detected cutters. The You Only Look Once version 5 (YOLOv5) algorithm was selected based on the findings of an earlier published study. Different models of YOLOv5 were trained with different architecture sizes with various optimizers using two-dimensional (2D) drill bit images provided by the SPE Drilling Uncertainty Prediction technical section (DUPTS) and labeled by the authors with training from industry subject matter experts. To achieve the modeling goal, the images were first annotated and labeled to create training, validation, and testing sub-datasets. Then, by changing brightness and color, the images allocated for the training phase were augmented to generate more samples for the model development. The categories defined for labeling the DUPTS dataset were bond failure, broken cutter, chipped cutter, lost cutter, worn cutter, green cutter, green gauge, core out, junk damage, ring out, and top view. 
These categories can be updated once the IADC upgrade committee finishes upgrading IADC dull bit grading cones.\n Trained models were validated using the validation dataset of 2D images. It showed that the large YOLOv5 with stochastic gradient descent (SGD) optimizer achieved the highest metrics with a short training cycle compared to the Adam optimizer. In addition, the model was tested using an unseen data set collected from the local office of a drill bit supplier. Testing results illustrated a high level of performance. However, it was observed that inconsistency and quality of rig site drill bit photos reduce model accuracy. Therefore, it is suggested that companies produce large sets of quality images for developing better models. This study successfully demonstrates the integration of computer vision and machine learning for drill bit grading by categorizing/classifying damaged cutters by type directly in one stage rather than detecting the cutters first and then classifying them in a second stage. To guarantee the deployed model's robustness and consistency the model deployment has been tested in different environments that include cloud platform, container on a local machine, and cloud platform as a service (PaaS) with an online web app. In addition, the model can detect ring out and cored damages from the top view drill bit images, and to the best of the authors’ knowledge, this has not been addressed by any study before.\n The novelty of the developed deep learning computer vision algorithm is the ability to detect different cutter damage types in a fast and efficient process compared to the current lengthy manual damage evaluation practice. Furthermore, the trained model can detect damages that frequently take place in more than one blade of the bit such as ring outs and coring. In addition, a user-friendly interface was developed that generates results in pdf and CSV file formats for further data analysis, visualization, and documentation. 
Also, all the technologies used in the development of the model are open source and we made our web app implementation open access.","PeriodicalId":103776,"journal":{"name":"Day 2 Wed, March 08, 2023","volume":"83 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Day 2 Wed, March 08, 2023","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.2118/212503-ms","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

There is considerable value in automatically quantifying cutter damage from drill bit photographs. Current approaches do not classify cutter damage by type (e.g., broken, chipped, or lost). We therefore present a computer vision model using deep learning neural networks to automate multi-type damage detection in Polycrystalline Diamond Compact (PDC) drill bit cutters.

The automated bit damage detection approach presented in this paper is based on training a computer vision model on different cutter damage types, with the aim of detecting and classifying damaged cutters directly. Prior approaches detected cutters first and then classified the damage type of the detected cutters. The You Only Look Once version 5 (YOLOv5) algorithm was selected based on the findings of an earlier published study. YOLOv5 models of different architecture sizes were trained with various optimizers on two-dimensional (2D) drill bit images provided by the SPE Drilling Uncertainty Prediction Technical Section (DUPTS) and labeled by the authors with training from industry subject matter experts. To achieve the modeling goal, the images were first annotated and labeled to create training, validation, and testing sub-datasets. The images allocated to the training phase were then augmented by changing brightness and color to generate more samples for model development. The categories defined for labeling the DUPTS dataset were bond failure, broken cutter, chipped cutter, lost cutter, worn cutter, green cutter, green gauge, core out, junk damage, ring out, and top view. These categories can be updated once the IADC upgrade committee finishes upgrading the IADC dull bit grading codes.

Trained models were validated using the validation dataset of 2D images. Validation showed that the large YOLOv5 model with the stochastic gradient descent (SGD) optimizer achieved the highest metrics, with a shorter training cycle than the Adam optimizer.
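The brightness- and color-based augmentation described above can be sketched roughly as follows. This is a minimal stdlib-only illustration, not the authors' actual pipeline; the pixel representation, jitter ranges, and function names are assumptions made for the example.

```python
import random

def jitter_pixels(pixels, brightness, saturation):
    """Apply brightness scaling and saturation blending to RGB pixels.

    pixels: list of (r, g, b) tuples in 0-255 (a stand-in for image data).
    brightness: multiplicative factor (1.0 = unchanged).
    saturation: 0.0 = greyscale, 1.0 = unchanged.
    """
    out = []
    for r, g, b in pixels:
        # Brightness: scale each channel, clamping to the valid range.
        r, g, b = (max(0, min(255, int(c * brightness))) for c in (r, g, b))
        # Saturation: blend each channel toward the pixel's grey level.
        grey = (r + g + b) // 3
        r, g, b = (max(0, min(255, int(grey + (c - grey) * saturation)))
                   for c in (r, g, b))
        out.append((r, g, b))
    return out

def augment(pixels, n_copies=4, seed=0):
    """Generate n_copies randomly jittered variants of one labeled image."""
    rng = random.Random(seed)
    return [
        jitter_pixels(pixels,
                      brightness=rng.uniform(0.6, 1.4),
                      saturation=rng.uniform(0.6, 1.4))
        for _ in range(n_copies)
    ]

sample = [(120, 90, 60), (200, 180, 150)]  # two pixels standing in for a bit photo
variants = augment(sample)
print(len(variants))  # 4
```

In a real pipeline the same idea would be applied to full images (e.g., via an image library's brightness/color enhancers), with the bounding-box labels carried over unchanged since these jitters do not move any pixels.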
In addition, the model was tested on an unseen dataset collected from the local office of a drill bit supplier, and the testing results showed a high level of performance. However, it was observed that the inconsistency and poor quality of rig-site drill bit photographs reduce model accuracy; it is therefore suggested that companies produce large sets of quality images for developing better models. This study successfully demonstrates the integration of computer vision and machine learning for drill bit grading by classifying damaged cutters by type directly in one stage, rather than detecting the cutters first and then classifying them in a second stage. To guarantee the deployed model's robustness and consistency, the deployment was tested in different environments, including a cloud platform, a container on a local machine, and a cloud platform as a service (PaaS) with an online web app. In addition, the model can detect ring out and coring damage from top-view drill bit images, which, to the best of the authors' knowledge, has not been addressed by any previous study.

The novelty of the developed deep learning computer vision algorithm is its ability to detect different cutter damage types in a fast and efficient process, compared to the current lengthy manual damage evaluation practice. Furthermore, the trained model can detect damage that frequently spans more than one blade of the bit, such as ring out and coring. In addition, a user-friendly interface was developed that generates results in PDF and CSV file formats for further data analysis, visualization, and documentation. Also, all the technologies used in developing the model are open source, and we have made our web app implementation open access.
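The CSV export mentioned above could look something like the following sketch. The detection fields (class label, confidence, pixel-coordinate bounding box) are assumptions about what a YOLOv5-style detector would report, not the authors' actual schema, and the example detections are fabricated for illustration only.

```python
import csv
import io

# Hypothetical per-cutter detections, one dict per detected damage instance.
detections = [
    {"label": "chipped cutter", "confidence": 0.91,
     "x1": 34, "y1": 50, "x2": 88, "y2": 110},
    {"label": "lost cutter", "confidence": 0.87,
     "x1": 140, "y1": 42, "x2": 190, "y2": 98},
]

def detections_to_csv(rows):
    """Serialise detection rows to CSV text for downstream analysis."""
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf, fieldnames=["label", "confidence", "x1", "y1", "x2", "y2"])
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(detections_to_csv(detections))
```

A CSV like this is convenient because each row is one detected damage instance, so damage counts per type fall out of a simple group-by in any spreadsheet or analysis tool; the paper's PDF report would be generated from the same detection records.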