Concerted, Computing-Intense Novel MFL Approach Ensuring Reliability and Reducing the Need for Dig Verification
Johannes Palmer, Aaron Schartner, A. Danilov, Vincent Tse
Volume 1: Pipeline and Facilities Integrity, published 2020-09-28. DOI: 10.1115/IPC2020-9361
Abstract
Magnetic Flux Leakage (MFL) is a robust technology with high data coverage. Decades of continuous sizing improvement have led to industry-accepted sizing reliability, and the continuous optimization of sizing processes ensures accurate results in categorizing metal loss features. However, the selection of critical anomalies is not always optimal: anomalies are sometimes dug up too early or unnecessarily because the feature type found in the field (the true metal loss shape) was incorrectly identified, which affects sizing and tolerance. Incorrectly identified feature types can also cause false under-calls.
Today, complex empirical formulas combined with multifaceted lookup tables, fed by pull tests, synthetic data, dig verifications, machine learning, artificial intelligence and, not least, human expertise, translate MFL signals into metal loss assessments with a high level of success. Nevertheless, two principal factors limit how far MFL sizing can be optimized. One is the empirical character of the signal interpretation; the other is the data and result simplification that this interpretation implicitly induces.
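As a rough illustration of the conventional, feature-based route described above, the sketch below maps a single measured signal feature to a depth estimate through an empirically calibrated lookup table. The table values, the choice of peak amplitude as the feature, and the use of NumPy interpolation are assumptions made purely for illustration; they are not the formulas or tables used in practice.

```python
# Hedged sketch of the conventional, feature-based MFL sizing route:
# reduce the measured signal to a feature (here: peak leakage amplitude)
# and map it to a depth estimate via an empirically calibrated table.
# Table values are invented for illustration; real tables are built from
# pull tests, dig verifications and synthetic data.
import numpy as np

# Calibration table: peak leakage amplitude (normalized) -> depth (% wall)
cal_amplitude = np.array([0.05, 0.10, 0.20, 0.40, 0.70])
cal_depth_pct = np.array([5.0, 12.0, 25.0, 45.0, 70.0])

def estimate_depth(peak_amplitude):
    """Interpolate the calibrated relation at the measured peak amplitude."""
    return np.interp(peak_amplitude, cal_amplitude, cal_depth_pct)

print(estimate_depth(0.28))  # ~33% wall loss under these made-up numbers
```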
The reason this route has been followed for many years is simple: it is methodologically impossible to calculate the metal loss geometry directly from the signals, and the sheer number of possible relevant geometries is so large that simplification is necessary and inevitable. The second methodological reason is the ambiguity of the signal, which means metal loss sizing can only target the most probable solution. However, even under the best conditions, the most probable solution is not necessarily the correct one.
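The ambiguity argument can be illustrated with a toy model: once the signal is reduced to a single feature, several quite different defect geometries become indistinguishable. The relation below is invented for illustration only and is not a real MFL model.

```python
# Toy illustration of MFL ambiguity under simplification: if the signal is
# reduced to one feature (peak leakage amplitude), many defect geometries
# map to essentially the same value, so the "most probable" geometry has to
# be chosen by convention. Numbers are purely illustrative.
import numpy as np

def peak_amplitude(length_mm, depth_frac):
    # Hypothetical toy relation: amplitude rises with depth and
    # saturates with defect length (not a real MFL forward model).
    return depth_frac * (1.0 - np.exp(-length_mm / 10.0))

candidates = [(5.0, 0.500), (15.0, 0.254), (40.0, 0.201)]  # (length, depth)
for length, depth in candidates:
    print(f"length={length:5.1f} mm  depth={depth:.3f}  "
          f"peak={peak_amplitude(length, depth):.3f}")
# All three candidates yield a peak of about 0.197, even though the depth
# ranges from roughly 20% to 50% of the wall thickness.
```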
This paper describes a novel, fundamentally different approach as a basic alternative to the common MFL analysis described above. A calculation process is presented that overcomes the empirical nature of traditional approaches by using a result-optimization method that relies on intense computing and avoids any simplification. The strategy for overcoming MFL ambiguity is also shown. Detailed blind-test examples, carried out together with the operator, demonstrate the enormous level of detail, repeatability and accuracy of this groundbreaking method, which has the potential to reduce tool tolerance, increase sizing and growth rate accuracy, and help optimize the dig program to target critical features with greater confidence.
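As a minimal sketch of what an inversion-by-optimization scheme of this general kind could look like, the code below fits candidate defect parameters by minimizing the mismatch between a simulated and a measured signal. The Gaussian forward model, the parameter names and the use of SciPy's optimizer are assumptions for illustration only; the paper's actual forward model and optimization strategy are not described at this level of detail in the abstract.

```python
# Minimal sketch of MFL inversion by result optimization: propose a defect
# geometry, simulate its leakage signal, and iteratively refine the geometry
# until the simulation reproduces the measurement. The forward model here is
# a placeholder Gaussian bump parameterized by defect length and depth.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-50.0, 50.0, 201)  # axial positions along the pipe wall [mm]

def forward_mfl(params, x):
    """Hypothetical forward model: amplitude grows with depth,
    spreads with defect length (all units illustrative)."""
    length, depth = params
    return depth * np.exp(-(x / max(length, 1e-6)) ** 2)

# Synthetic "measured" signal from an unknown defect, plus sensor noise.
true_params = np.array([12.0, 0.35])   # length [mm], depth [fraction of wall]
rng = np.random.default_rng(0)
measured = forward_mfl(true_params, x) + rng.normal(0.0, 0.005, x.size)

def misfit(params):
    """Least-squares mismatch between simulated and measured signals."""
    return np.sum((forward_mfl(params, x) - measured) ** 2)

# The computing-intense step: in practice each forward evaluation would be
# a full magnetostatic simulation rather than a closed-form expression.
result = minimize(misfit, x0=np.array([5.0, 0.1]),
                  bounds=[(1.0, 100.0), (0.0, 1.0)])
print("estimated length, depth:", result.x)
```

In such a scheme the resolution of the ambiguity comes from matching the entire signal rather than a handful of extracted features, which is the conceptual difference from the lookup-table route sketched earlier.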