Making the black-box brighter: Interpreting machine learning algorithm for forecasting drilling accidents

Ekaterina Gurina, Nikita Klyuchnikov, Ksenia Antipova, Dmitry Koroteev

Research output: Contribution to journal › Article › peer-review

Abstract

We present an approach for interpreting a black-box alarm system that forecasts accidents and anomalies during the drilling of oil and gas wells. The interpretation methodology aims to explain the local behavior of the accident-prediction model to drilling engineers. The explanatory model applies Shapley additive explanations (SHAP) analysis to features obtained through a Bag-of-features representation of the telemetry logs used during the accident-forecasting phase. Validation shows that the explanatory model achieves 15% precision at 70% recall and outperforms both a random baseline and a multi-head attention neural network on these metrics. These results indicate that the developed explanatory model is better aligned with drilling engineers' explanations than the state-of-the-art method. Together, the explanatory and Bag-of-features models allow drilling engineers to understand the logic behind the system's decisions at a particular moment, pay attention to the highlighted telemetry regions, and, correspondingly, place greater trust in the accident-forecasting alarms.
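
To illustrate the core idea of Shapley additive explanations used above, here is a minimal, self-contained sketch that computes exact Shapley values by enumerating feature coalitions against a baseline. The feature names (hypothetical telemetry-derived bag-of-features counts) and the toy linear scoring function are illustrative assumptions, not the authors' actual model; the paper's black-box forecaster would take the place of `score`.

```python
# Minimal sketch of Shapley additive explanations for a toy anomaly score.
# Feature names and the scoring function are illustrative assumptions,
# standing in for the black-box drilling-accident forecaster.
from itertools import combinations
from math import factorial

FEATURES = ["hook_load", "standpipe_pressure", "torque"]  # hypothetical features
BASELINE = {"hook_load": 0.0, "standpipe_pressure": 0.0, "torque": 0.0}

def score(x):
    """Toy accident-likelihood score (a weighted sum, for illustration only)."""
    return 0.5 * x["hook_load"] + 0.3 * x["standpipe_pressure"] + 0.2 * x["torque"]

def shapley_values(x, baseline, features, f):
    """Exact Shapley values via brute-force enumeration of all coalitions.

    Features outside a coalition are replaced by their baseline values,
    which is the standard way to 'remove' a feature from a model input.
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [j for j in features if j != i]
        total = 0.0
        for r in range(n):  # coalition sizes 0 .. n-1 (excluding feature i)
            for subset in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                with_i = {j: x[j] if (j in subset or j == i) else baseline[j]
                          for j in features}
                without_i = {j: x[j] if j in subset else baseline[j]
                             for j in features}
                total += weight * (f(with_i) - f(without_i))
        phi[i] = total
    return phi

sample = {"hook_load": 2.0, "standpipe_pressure": 1.0, "torque": 4.0}
phi = shapley_values(sample, BASELINE, FEATURES, score)
# For a linear model, each Shapley value equals weight * (x_i - baseline_i):
# hook_load -> 1.0, standpipe_pressure -> 0.3, torque -> 0.8
```

By the efficiency property, the Shapley values sum to `score(sample) - score(BASELINE)`, which is what makes the explanation "additive": each feature's contribution can be read off individually and they jointly account for the model's output. In practice, the `shap` Python package approximates these values efficiently rather than enumerating all coalitions as done here.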

Original language: English
Article number: 111041
Journal: Journal of Petroleum Science and Engineering
Volume: 218
Publication status: Published - Nov 2022

Keywords

  • Bag-of-features
  • Drilling
  • Interpretability
  • Machine learning
  • Telemetry
