Are you getting the most out of your inline inspection data?

Intelligent pigging, or inline inspection, has been around for close on 40 years and has seen vast changes in technology, accuracy and scope of use. These advances have created new challenges.

Without the right approach to the vast amounts of data available, you could be missing out on substantial improvements in safety and performance.

Intelligent pigging

For many operators, intelligent pigging is the primary instrument of an integrity management program, yet most are not realizing the full business value of the data. This means they are missing out on substantial improvements in pipeline performance and safety. In parallel with the advances in inspection and data collection technology, software capabilities have grown dramatically. When integrity engineers realize the full benefits of their inline inspection data, the changes will be far reaching, affecting how well pipeline owners and operators worldwide meet international standards such as ISO 55000 and deliver on their operational excellence initiatives.

In this article I will connect these potential benefits with the advances in IT and software technology that we at DNV - Digital Solutions have been exploring and bringing to market in our pipeline integrity product, Synergi Pipeline.

The pipeline industry has what I would describe as a “big data like” problem, satisfying to a large degree the defining conditions of big data: high volume, great variety and frequent updates. I say “like” because, although not of the absolute size of the data gathering and mining needs tackled by, say, large financial systems, it presents many of the same challenges and data complexities. We have to be honest with ourselves: as an industry we have been fairly conservative in how we use data to assess pipeline condition, risk levels and operational performance. We must learn from other industries, and from other parts of our own, and adopt technologies and methodologies that can advance pipeline assessment and risk management. This opens up a vast list of possibilities and challenges.

Data management for inline inspections

Where does all the data we need come from, and where should we be looking for data that can help us? Before a pipeline even enters operation, we face a substantial data volume challenge: design records, mill records, as-built details, weld and coating inspections, pre-commissioning caliper surveys and, increasingly, intelligent pigging fingerprint runs. Managing this information effectively from the outset is both financially and technically smart; it is the foundation of our future integrity program, and it is also becoming mandatory in many jurisdictions.

However, this is only the beginning of the challenge. In the operational phase we face potentially enormous data volumes, which grow with the age of the asset. Many additional sources of information are gathered, including SCADA data (important for fatigue analysis), corrosion inspection reports, ground movement sensors and of course inline inspection data. Inline inspection data alone can be very high volume, depending on the type of inspection and the condition of the pipeline. Consider also that this is a multidimensional data problem, with linear (in-pipe or above-ground distance), spatial (GPS coordinates), circumferential (clock position on the pipe) and time dimensions. As the asset ages, operators need to mine, review and spot trends in the data so they can preempt problems that would elevate risk levels; left unchecked, these risks could lead to asset failure. Think of this as looking for the proverbial needle in a haystack. How do we map a path to make sure that we find the needle?
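
To make this multidimensionality concrete, here is a minimal sketch of what a single anomaly record might look like. The class and field names are illustrative assumptions, not the PODS model or the Synergi Pipeline schema:

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class ILIAnomaly:
        """One reported metal-loss feature from an inline inspection run."""
        run_date: date             # time dimension: when the tool was run
        odometer_m: float          # linear position along the pipe (metres)
        latitude: float            # spatial position from above-ground survey
        longitude: float
        clock_position_deg: float  # circumferential location (0-360 degrees)
        depth_pct_wt: float        # peak depth as % of nominal wall thickness
        length_mm: float           # axial extent of the feature

    # A single run can report hundreds of thousands of such features;
    # trend-spotting means matching them across runs on all four dimensions.
    anomaly = ILIAnomaly(date(2014, 6, 1), 12345.6, 53.41, -2.98, 270.0, 23.0, 45.0)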

Five ways pipeline integrity software can help

Any effective pipeline integrity program must address this, and pipeline integrity software plays a pivotal role. Here are the five most important ways in which your pipeline integrity software should help you:

  • In managing the data and integrity processes. Can you get to the information in an efficient and timely manner to make effective real-time decisions?
  • In finding the needle in the haystack. It must do the calculations and help lead you to the problem areas, utilizing analytical modelling, corrosion process models, expert (or knowledge-based) models and assessment methods, statistically based data mining and analytics, or pattern recognition approaches. It must also incorporate linear and spatial analysis to identify causal or potentially causal relationships between detected anomalies and the pipe environment (a minimal matching sketch follows this list).
  • In providing excellent domain-centric visualization tools. Our eyes are probably the most powerful analytical tools available to us, but they need help focusing on the meaningful relationships in the data. Effective software systems present information intuitively, in a way that helps us rationalize complex information relationships.
  • In effective reporting. This isn’t limited to the must-haves and routine reports, but includes delivering complex results so they are clearly understood by decision-makers and lead to effective performance-enabling actions.
  • By easy integration with your enterprise systems. A fully integrated solution can be a full participant in your operational risk management program.
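
As a deliberately naive sketch of that second point, the function below pairs anomalies from two runs by position and ranks them by estimated growth rate, building on the hypothetical ILIAnomaly record above. Real run-to-run alignment is far more sophisticated (runs are typically aligned on girth welds before features are compared), and the tolerances here are arbitrary assumptions:

    def match_and_rank(run_a, run_b, dist_tol_m=0.5, clock_tol_deg=20.0, years=5.0):
        """Pair anomalies from an earlier run (run_a) and a later run (run_b)
        by position, then rank the pairs by estimated depth growth rate."""
        flagged = []
        for new in run_b:
            # naive nearest-position match within linear and clock tolerances
            candidates = [
                old for old in run_a
                if abs(old.odometer_m - new.odometer_m) <= dist_tol_m
                and abs(old.clock_position_deg - new.clock_position_deg) <= clock_tol_deg
            ]
            if candidates:
                old = min(candidates, key=lambda o: abs(o.odometer_m - new.odometer_m))
                rate = (new.depth_pct_wt - old.depth_pct_wt) / years  # % wt per year
                flagged.append((rate, old, new))
        # fastest-growing features first: these are the needles worth finding
        return sorted(flagged, key=lambda t: t[0], reverse=True)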

This is the basis upon which DNV’s Synergi Pipeline software is built, and how our consulting team deploys solutions. Central to this is how we manage inline inspection data and the business processes – from inspection planning through to analysis and actions.

For storage and retrieval, the Pipeline Open Data Standard (PODS) gives us an outline data model for the standardized storage of ILI data. But that is only part of what is required for an effective pipeline integrity solution, where we may be working with millions of pipeline features per inspection run and per section of pipeline. With Synergi Pipeline we go beyond this, using advanced database storage and retrieval methods to improve response times both for interactive applications, such as dynamic alignment sheets, and for offline analytics, such as risk assessments. Engineers using our software have many options for interacting with their data, allowing them to spot locational and time-dependent trends that could impact future pipeline performance.
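
As a rough illustration of why indexed storage and retrieval matter at this scale, the sketch below uses SQLite with a composite positional index. The table layout is a hypothetical simplification, nothing like a full PODS-based schema:

    import sqlite3

    conn = sqlite3.connect("ili.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS ili_feature (
            run_id        INTEGER,
            odometer_m    REAL,
            clock_deg     REAL,
            depth_pct_wt  REAL,
            length_mm     REAL
        )
    """)
    # A composite index on (run_id, odometer_m) keeps "features in this section"
    # queries responsive even with millions of rows per inspection run.
    conn.execute("CREATE INDEX IF NOT EXISTS ix_run_pos "
                 "ON ili_feature (run_id, odometer_m)")

    def features_in_section(run_id, start_m, end_m, min_depth_pct=0.0):
        """Retrieve features in one pipe section, deepest first."""
        return conn.execute(
            "SELECT odometer_m, clock_deg, depth_pct_wt, length_mm "
            "FROM ili_feature "
            "WHERE run_id = ? AND odometer_m BETWEEN ? AND ? AND depth_pct_wt >= ? "
            "ORDER BY depth_pct_wt DESC",
            (run_id, start_m, end_m, min_depth_pct),
        ).fetchall()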

Specialist analytical tools are essential for making sense of inline inspection data. These include many of the standard corrosion, crack and dent assessment methods, as well as statistical analysis that can identify trends and correlations between inline inspection features, pipeline properties (for example grade, manufacturer, age, coating type and operating conditions) and the likelihood of pipeline failure.
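
As an example of one such standard method, here is a simplified sketch of the Modified B31G (0.85dL) remaining-strength calculation for a metal-loss defect. It omits safety factors and river-bottom profiling, so treat it as illustrative only, not a substitute for the standard:

    import math

    def modified_b31g_pf(smys_mpa, d_mm, t_mm, D_mm, L_mm):
        """Estimated failure pressure (MPa) of a corroded pipe section
        per the Modified B31G (0.85dL) method, without safety factors.

        smys_mpa: specified minimum yield strength
        d_mm: peak defect depth; t_mm: nominal wall thickness
        D_mm: outside diameter; L_mm: axial defect length
        """
        flow_stress = smys_mpa + 68.95                  # SMYS + 10 ksi
        z = L_mm ** 2 / (D_mm * t_mm)
        if z <= 50.0:
            m = math.sqrt(1 + 0.6275 * z - 0.003375 * z ** 2)  # Folias factor
        else:
            m = 0.032 * z + 3.3
        dt = d_mm / t_mm
        failure_stress = flow_stress * (1 - 0.85 * dt) / (1 - 0.85 * dt / m)
        return 2 * failure_stress * t_mm / D_mm

    # Example: X52 pipe, 610 mm OD, 9.5 mm wall, 50% deep and 100 mm long defect
    # modified_b31g_pf(359, 4.75, 9.5, 610, 100)  -> roughly 10.9 MPa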

The future of pipeline integrity

These new technologies are central to the future of pipeline integrity management. They will help us identify and examine apparent causal relationships that we might otherwise have overlooked. Quantitative defect and risk assessments are of fundamental importance in providing measurable factors that drive the accuracy and variability of our analysis. One example is calculating the probability of failure of a pipe section from ILI data, where we need to verify and incorporate vendor performance metrics, such as probability of detection and depth sizing variance, in order to understand the true likelihood of failure. We must account for imperfect measurements and imperfect knowledge of pipeline properties, and use GIS for spatial context, such as soil types and the proximity of high-voltage power lines, both of which affect corrosion growth. In addition, operational pressure histories let us calculate fatigue effects in the presence of corrosion or other defects, including how these change over time.
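
To show how a vendor's sizing tolerance feeds into such a probability of failure, here is a Monte Carlo sketch that reuses the Modified B31G function above. The inputs, including the conversion of a typical "±10% wt at 80% confidence" tool specification into a standard deviation, are illustrative assumptions:

    import random

    def pof_monte_carlo(reported_depth_mm, sizing_sd_mm, t_mm, D_mm, L_mm,
                        smys_mpa, maop_mpa, trials=100_000, seed=1):
        """Per-feature probability of failure, treating true depth as normally
        distributed around the reported depth with the tool's sizing error.
        A spec of +/-10% wt at 80% confidence implies sd ~ 0.10 * t / 1.28."""
        rng = random.Random(seed)
        failures = 0
        for _ in range(trials):
            depth = rng.gauss(reported_depth_mm, sizing_sd_mm)
            # clamp to the physical range (B31G validity actually ends ~80% wt)
            depth = min(max(depth, 0.0), t_mm)
            if modified_b31g_pf(smys_mpa, depth, t_mm, D_mm, L_mm) <= maop_mpa:
                failures += 1
        return failures / trials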

Pipeline integrity should not be treated as an island within your organization, disconnected from the rest of your business. The fitness for service, reliability and risk levels of a pipeline are critical inputs to optimizing and managing operational performance and risk, and this perspective illustrates how valuable inline inspection data can be to a pipeline operator. Control rooms and asset performance teams must provide timely, accurate and consistent information to other departments to set the stage for optimal and time-critical decisions that impact safety and operational performance; asset condition, for example, affects potential throughput and security of supply. To manage this well, integrity information must be integrated with state-of-the-art network optimization and planning tools, such as DNV's Synergi Gas for gas distribution or Synergi Pipeline Simulator for pipeline analysis.

Close links must also be made to control room applications, for example leak detection solutions such as that provided by Synergi Pipeline Simulator. The likelihood of failure calculated from inline inspection data is, in effect, our starting belief about the chance of a leak occurring due to corrosion or stress corrosion cracking; feeding it into leak detection can maximize leak sensitivity and enable optimal detection. Combining the two has the potential to greatly improve leak detection performance and, as part of an emergency response system, can be critical in determining effective responses and follow-up actions when incidents occur.
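
Conceptually, combining the ILI-derived likelihood of failure with a leak detection alarm is a Bayesian update. The sketch below is a textbook Bayes-rule calculation with made-up sensitivity and false-alarm figures; it is not the method implemented in Synergi Pipeline Simulator:

    def leak_belief_update(prior_pof, p_alarm_given_leak, p_alarm_given_no_leak, alarm):
        """Update the belief that a segment is leaking, given an alarm state.
        prior_pof is the chance of a leak from the integrity assessment;
        the conditionals describe the detection system's sensitivity and
        false-alarm rate."""
        if alarm:
            num = p_alarm_given_leak * prior_pof
            den = num + p_alarm_given_no_leak * (1 - prior_pof)
        else:
            num = (1 - p_alarm_given_leak) * prior_pof
            den = num + (1 - p_alarm_given_no_leak) * (1 - prior_pof)
        return num / den

    # A segment with an elevated corrosion-driven prior and a raised alarm:
    # leak_belief_update(0.05, 0.90, 0.02, alarm=True)  -> about 0.70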

Overlooked information sources

Finally, I will return to the ‘big data’ theme and look to the future of how we can increase utilization of our inline inspection data. One largely overlooked and certainly underutilized information source is what big data terminology calls unstructured data: information not stored in the typical relational or spatial databases, such as field reports, email communications and reports containing expert opinions and exchanges from domain and pipeline-specific specialists. Think also of companies that use internal social media for business purposes; on platforms such as Yammer, junior and senior engineers alike exchange valuable information daily on operational and more general professional subjects. How can we make use of this? Think of how Google crawls websites, picks up keywords and expressions, and empirically correlates them with search terms to return content of potential interest, ranked by relevance. Think of the power this could bring in understanding the causes of specific conditions and possible problems with your pipeline. Combine this with input from more conventional sources, such as the results of the advanced analytics described earlier, and you begin to see the future of pipeline integrity analysis.
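
As a toy illustration of that kind of text mining, the sketch below ranks free-text field reports against integrity keywords with a bare-bones TF-IDF score. Production text analytics would go far beyond this, but the principle is the same:

    import math
    import re
    from collections import Counter

    def rank_reports(reports, query_terms):
        """Rank free-text reports by TF-IDF relevance to the query terms.
        reports: {report_id: text}; returns report ids, most relevant first."""
        tokenized = {rid: re.findall(r"[a-z]+", text.lower())
                     for rid, text in reports.items()}
        n_docs = len(tokenized)
        scores = {}
        for rid, tokens in tokenized.items():
            tf = Counter(tokens)
            score = 0.0
            for term in query_terms:
                df = sum(1 for toks in tokenized.values() if term in toks)
                if df and tf[term]:
                    # term frequency weighted by how rare the term is overall
                    score += (tf[term] / len(tokens)) * math.log(n_docs / df)
            scores[rid] = score
        return sorted(scores, key=scores.get, reverse=True)

    # rank_reports({"r1": "coating disbondment near the river crossing", ...},
    #              ["disbondment", "corrosion"])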

Author:

Tom Gilmour, Strategic Product Director, DNV - Software


Do you need more information?

Have a look at our brochure, videos and downloads

Register for a Synergi Pipeline demo