Case Study: Improvements in a Piping Program Yield a 3X ROI

Learn how we implemented a piping reliability program that turned a compliance project into a 3X Return on Investment (ROI) for a refiner. The implementation of this program was an important, calculated step in the operator’s evolution toward an effective RBI program.


This refiner lacked a standardized inspection program and needed to proactively identify and mitigate its loss of containment (LOC) risks to meet compliance.


Pinnacle implemented asset strategies for the operator’s piping across four sites, including appropriate consideration of the piping’s operation and maintenance practices.


This refiner realized a 3X ROI and safely reduced the number of condition monitoring locations (CMLs) by 27.4%.


The impact of LOC events can range from a loss of profit to serious Health, Safety, and Environment (HSE) consequences. A strong, integrated mechanical integrity (MI) program can help facilities satisfy compliance regulations, improve reliability performance, and prevent LOC events from occurring. Having standardized, scalable asset strategies that strategically target inspections will help facilities proactively identify potential risks, understand those risks and drivers, and prevent LOC events before they occur.

The Challenge

This refiner experienced two major LOC events at multiple sites, which resulted in significant expenses and compliance violations. One of the events was caused by a leak in a section of insulated carbon steel piping that had thinned over time due to corrosion. As a result of the event, the refiner was required to implement inspection strategies across various piping classes to prevent future leaks.

Before the events, the refiner struggled to proactively identify and mitigate LOC risks and lacked an integrated, holistic MI program. Additionally, there was no formal system in place to flag assets that violated the acceptable range for process operating conditions. Further, some sites relied more heavily on the knowledge of experienced materials engineers and inspectors than others, and as a result, the quality of document organization varied by site.

To address these challenges and meet compliance, the operator needed to develop and implement a series of inspection strategies across its fixed equipment and piping that would enable employees to proactively identify, manage, and mitigate LOC risks at a system level and satisfy Recognized and Generally Accepted Good Engineering Practices (RAGAGEP). Specifically, these strategies needed to include a defined set of integrity operating windows (IOWs), change management criteria, and processes that would guide when to act on assets before they violated the acceptable process condition ranges.

Initially, the refiner attempted to implement these strategies on its own and quickly realized it needed additional resources and MI expertise. Additionally, some site leadership worried that they would not have the necessary resources to keep up with the increased level of inspection required when the initial and recurring inspection intervals started to overlap. Ultimately, Pinnacle was brought in to implement a standardized program that would strategically target areas to inspect and could be replicated across all sites.

Pinnacle’s Solution

As part of the solution, the Pinnacle team worked with the refiner to create a set of corporate piping standards. These standards, which were rolled out across four sites, focused on improving the operation and maintenance of the operator’s fixed equipment and preserving the piping’s pressure boundaries. The primary goal of the implementation was to provide the refiner with standardized drawings that could be leveraged by multiple sites and disciplines, including inspection, process, design, and turnaround planning.

The scope of the project included the following:


The objective of this step was to create a solid, system-level foundation for the inspection strategies on an expedited timeline. During this stage, the team developed a deep understanding of the operator’s piping, operational elements, and equipment by gathering and systemizing critical asset data from the piping process flow diagrams (PFDs), piping and instrumentation diagrams (P&IDs), and process data.

System-Level Damage Mechanism Review (DMR) & IOWs

After all critical data was gathered from existing documentation, the Pinnacle team identified the damage mechanisms and associated inspection strategies that could apply to those systems. Each system had specific damage mechanisms based on stream chemistry, materials of construction, and operating conditions. In addition to identifying whether the damage mechanism was local or generalized, the strategies specified whether the failure mode was cracking or thinning and the type of LOC event that was likely to occur if the asset failed.

After the system-level damage mechanisms were assigned, the team implemented a series of IOWs to help the facility identify potential damage mechanisms and the associated parameters that cause the damage.

With an IOW system in place, operators receive an alert when an IOW parameter crosses a specified operating threshold and can then adjust the facility’s conditions to bring the parameter back into the normal operating range, if feasible, or take other action as appropriate. The series of IOWs was prioritized by level of urgency and classified by the time frame and magnitude of potential failure. Some parameters, like sulfur content or temperature, may be inherent to the process. Regardless, to attain the desired reliability, remedies will be identified and recommended appropriately.
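The alerting logic described above can be sketched in a few lines. The parameter name, limits, and priority label below are illustrative assumptions, not the operator’s actual IOW definitions:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IOW:
    parameter: str   # hypothetical tag, e.g. "transfer_line_temp_F"
    low: float       # lower bound of the normal operating range
    high: float      # upper bound of the normal operating range
    priority: str    # urgency classification, e.g. "critical"

def check_iow(iow: IOW, reading: float) -> Optional[str]:
    """Return an alert message if the reading violates the IOW, else None."""
    if reading < iow.low:
        return f"[{iow.priority}] {iow.parameter} below limit: {reading} < {iow.low}"
    if reading > iow.high:
        return f"[{iow.priority}] {iow.parameter} above limit: {reading} > {iow.high}"
    return None

# Example: a high-temperature IOW on a hypothetical transfer line
iow = IOW("transfer_line_temp_F", low=500.0, high=700.0, priority="critical")
print(check_iow(iow, 725.0))  # violation -> alert string
print(check_iow(iow, 650.0))  # within range -> None
```

In practice the readings would stream from a process historian rather than be passed in by hand; the point is that each IOW carries its own range and urgency classification.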


The objective of this step was to conduct a more detailed analysis of the operator’s piping. After identifying the damage mechanisms and implementing IOWs at the system level, the team took a deeper dive into specific circuits, which enabled a more accurate understanding of specific process conditions and their potential impact on materials of construction and damage mechanisms.

For this project, the established systems were further broken down into circuits based on relevant considerations such as temperature, fluid velocity, and piping material.

ISO Correlation

The next task involved taking the circuits from the previous circuitization step and translating them to the inspection drawings, allowing inspectors to identify the start and stop points in the field and correlate CMLs to the circuits.  Additionally, during circuitization and ISO correlation, the team validated, defined, and marked deadlegs of potential concern within the systems.

Circuit-Level DMR

Damage mechanisms for each circuit were identified based on the materials, design and operating properties, and practices. The modes of damage (e.g., localized versus generalized corrosion, cracking, metallurgical, mechanical, creep, and brittle fracture) were considered along with their potential damage severity.

CML Optimization

Next, the Pinnacle team completed CML Optimization, which included the identification of potential CMLs and selected CMLs:

  • Potential CMLs (PCML): Understanding and leveraging the damage mechanisms identified during the circuit-level DMR and IOW meeting, the team identified every PCML location in accordance with the associated inspection strategy.
  • Selected CMLs: Once all PCMLs were placed, the team totaled the counts and applied inspection requirement guidance, which leverages pipe class and damage mechanism susceptibility.
  • CML Numbering and Datamining: Once selected CMLs were finalized, the team numbered them per circuit according to flow order. Flow order is important in understanding and using the system dynamics. Key information was populated for each selected CML, including damage mechanism morphology; characteristic locations of damage (e.g., weld, HAZ, base material, elbows, 12 o’clock position, ID or OD, extrados versus intrados); diameter; component type; pipe schedule; nominal thickness; and corrosion allowance. This information was used to determine the proper inspection methods (e.g., radiographic or ultrasonic), which were captured in the CML nomenclature consistent with the refinery’s inspection data management system (IDMS) and piping isometrics. This information was also confirmed for accuracy, or flagged for an update, when a selected CML coincided with an existing CML.
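As a rough illustration of the kind of record populated for each selected CML, the sketch below models the attributes listed above. The field names and the naming convention are hypothetical, not the refinery’s actual IDMS schema:

```python
from dataclasses import dataclass

@dataclass
class SelectedCML:
    circuit_id: str
    flow_order: int               # CMLs numbered per circuit in flow order
    damage_morphology: str        # e.g., "localized thinning"
    location: str                 # e.g., "elbow extrados, OD"
    diameter_in: float
    component_type: str
    schedule: str
    nominal_thickness_in: float
    corrosion_allowance_in: float
    inspection_method: str        # e.g., "UT" or "RT"

    @property
    def name(self) -> str:
        # Assumed nomenclature: circuit, flow-order number, and NDE method
        return f"{self.circuit_id}-{self.flow_order:03d}-{self.inspection_method}"

cml = SelectedCML("CW-101", 7, "localized thinning", "elbow extrados, OD",
                  6.0, "elbow", "40", 0.280, 0.0625, "UT")
print(cml.name)  # -> "CW-101-007-UT"
```

Encoding the NDE method in the CML name, as sketched here, is one way to keep the IDMS entry and the piping isometric consistent.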

Inspection, Testing, and Preventive Maintenance (ITPM)

The team conducted a high-level review of the elements required for inspection of each circuit. During this review, the team detailed crucial information such as nondestructive evaluation (NDE) types, selected CML counts, and deadlines/inspection intervals, which gave inspectors a chance to review the drawings and provide feedback. The team also aligned its inspection efforts so that previously planned activities could be expanded to acquire readings and groups of CMLs could be evaluated together, avoiding large numbers of CMLs coming due for inspection at the same time.

Asset Strategy

A formal asset strategy report was delivered during the project. These asset strategies detailed the resources needed to complete the work and any potential issues that could occur. Most importantly, they illustrated how inspections would be affected by changes in inspection methods and pipe class. The report also included summaries of the new CMLs and NDE inspection methods.

IDMS Update

The Pinnacle team then uploaded the information into the IDMS. This step helps the refiner better maintain its asset history and creates standardization across multiple sites. The database includes all validated information and assessments, as well as previous inspection results and summaries, and alerts facility leadership to upcoming inspection dates.


Evergreening is one of the most important aspects of an MI program. During this stage, the Pinnacle team worked with the site’s employees to sustain the MI program over time. The evergreening phase consists of helping the sites manage any changes and evaluate the above steps for potential changes that need to be implemented. For example, one site may need to replace a carbon steel circuit with stainless steel. For that change to be implemented smoothly, the team would review existing damage mechanisms and susceptibilities and likely need to update damage mechanism assignments, exclusions, and susceptibilities. Additionally, the team would rework the placement of potential CMLs and would need to check the selection requirements based on the new information. Following that, the team would rework the selection, update the ISOs with the new CML locations and names as well as other critical fields, and would ensure these updates are captured in the IDMS.

Additionally, the program implementation established a systematic inspection strategy that can be replicated across additional sites and equipment types. The standardization of inspection strategies at these four sites reduced the operator’s CML count by 27.4% and enabled the operator to proactively identify, manage, and mitigate LOC risks, helping the operator meet compliance.


The program implementation yielded a 3X ROI for the operator. The inspection strategies cost approximately $100MM to implement, and the resulting deliverables identified over 200 integrity threat recommendations (ITRs). Since these ITRs were identified before they caused failures, the operator was able to avoid the significant costs that would have been incurred had the piping failed. The operator calculated that if 50% of these threats had resulted in failure events, the probable cost of incidence (COI) would have totaled hundreds of millions of dollars.



The implementation of piping inspection strategies helped the refiner take a step toward a more integrated, holistic MI program. With these new strategies in place, the operator can better focus its approach to risk management across various classes of piping and help ensure that these sites meet compliance.

Failure Modes and Effects Analysis (FMEA) In Newton™

How to conduct Failure Modes and Effects Analysis (FMEA) in Newton™

Production facilities are large and complex. To optimize performance and manage risk, operations and maintenance teams need to understand each piece of equipment’s function, how it can fail, and the consequences of that failure. Only then can they build asset strategies that minimize the impacts of equipment failure. To accomplish this difficult task, the industry best practice is to complete a Failure Modes and Effects Analysis (FMEA). Traditionally, this is done through on-site interviews with subject matter experts (SMEs) and maintenance personnel who are familiar with site operations. During these interviews, the SME is asked a series of questions to subjectively determine failure modes, mechanisms, likelihood of failure, and severity of the event. The information gathered is then used to design an asset strategy that addresses the findings of the FMEA. Although this approach has some benefits, it is primarily subjective and heavily reliant on collective or passed-down knowledge. It is also time-consuming and can lead to inconsistencies and ineffective strategies.

Alternatively, Newton™ offers a quantitative approach to FMEA. Newton™ connects every facet of reliability and is the only software application in the world that facilitates the Quantitative Reliability Optimization (QRO) methodology. The analysis starts by defining the asset register and creating a facility model that is used to calculate the production losses that can result from equipment failure. The Newton™ framework utilizes asset templates (fixed and rotating equipment) and walks the user through customizing each asset’s function, components, and failure modes. Leveraging data from computerized maintenance management systems (CMMS), process historians, and production loss accounting, the failure modes, probability of failure, and consequences are quantified into statistical distributions, eliminating subjectivity. Lastly, this comprehensive quantitative model is calculated and used to assess criticality. From there, optimal asset strategies can be implemented from scratch or customized from available templates. Producing asset strategies quantitatively yields results that are more specific and consistent than relying on a qualitative methodology.

Moreover, the model can now be utilized to dynamically learn and update probability of failure using condition-based monitoring data. This creates an evergreen model that automatically responds to new data and threats. This also allows the user to quantify the effectiveness of the existing maintenance strategies and provides a systematic approach to continually improve their reliability program.

What is an FMEA?

A Failure Modes and Effects Analysis (FMEA) is a foundational analysis used to gather data in reliability programs such as Reliability Centered Maintenance (RCM) or QRO. It is a step-by-step qualitative process used to pinpoint the functional requirements of an asset, system, or unit.

What Makes Newton™ Different?

Conducting an FMEA in Newton™ with quantitative methodology and recommendations provides several benefits, including:

  • A more accurate criticality ranking
  • Cost savings due to the reduction of overly conservative recommendations
  • Less time required from site SMEs
  • Faster implementation through templating and data-driven analytics
  • Faster time to value
  • Improved production performance
  • A dynamic, learning model to keep recommendations up to date
  • A consistent approach used for all assets, rotating and fixed

To understand more about conducting FMEAs in Newton™ or creating a data-driven reliability program, schedule a discovery call.

Moving from Reactive to Proactive Reliability Strategies: Is it Worth the Effort?

Over the last two decades, facilities have developed an array of asset management programs to improve compliance and overall risk management, but they largely remain in a state of reactivity. At times, the list of activities can be overwhelming, making it hard to prioritize what to do first. Facilities are constantly playing defense rather than offense, with never-ending piles of work orders stacking up. But what if facilities were able to predict failures and mitigate them before they happened?

A Look at Traditional Proactive Strategies

Reliability can be broken down into two main categories: Mechanical Integrity/Risk-Based Inspection (MI/RBI) and Reliability Centered Maintenance (RCM) strategies. Before risk-based strategies, chemical and oil and gas facilities achieved asset integrity through a Time-Based Approach (TBA).

Risk Based Inspection (RBI) Programs

The TBA is highly inefficient because it often results in resources being spent disproportionately to the risk of each asset. These dated practices also do little to encourage operators to become deeply educated on their units, systems, or the interdependent effects of operating practices on different types of equipment. A TBA is overly conservative, often wasting time and money inspecting assets that do not need it, or, conversely, not inspecting others often enough. An RBI program considers past events and current conditions and can help predict the best future operating practices and their effects on the integrity of the equipment.

Reliability Centered Maintenance (RCM) Programs

On the non-fixed equipment side, RCM has traditionally been thought of as a one-time study that follows a systematic set of questions called a Failure Modes and Effects Analysis (FMEA) and identifies a set of tasks to mitigate these failure modes. Equipment criticality is defined based on the consequence of these failures.

An enhancement to the traditional RCM methods was the introduction of risk into the analysis. Risk is defined as the product of Probability of Failure (POF) and Consequence of Failure (COF): Risk = POF × COF. Most RCM analyses in the last two decades have incorporated probability into the analysis to define equipment criticality, developing risk matrices for criteria such as safety, environmental impact, production loss, financial impact, and reputation impact.
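The risk calculation itself is simple to sketch. The assets and values below are hypothetical, but they show how a high-consequence asset can outrank a higher-POF asset in a criticality ranking:

```python
def risk(pof: float, cof_usd: float) -> float:
    """Risk as expected annual loss: probability of failure x consequence."""
    return pof * cof_usd

# Hypothetical assets: annual POF and consequence of failure in dollars
assets = {
    "P-101 charge pump": risk(pof=0.10,  cof_usd=2_000_000),   # $200k/yr
    "E-205 exchanger":   risk(pof=0.02,  cof_usd=15_000_000),  # $300k/yr
    "V-310 separator":   risk(pof=0.005, cof_usd=8_000_000),   # $40k/yr
}

# Criticality ranking: highest risk first
ranking = sorted(assets, key=assets.get, reverse=True)
print(ranking[0])  # the exchanger ranks first despite its lower POF
```

A real risk matrix would score safety, environmental, and reputation consequences alongside the financial figure used here.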

There are limitations to these methods that cannot be ignored. First, these programs are only as good as the data you put into them. RBI and RCM models are static and require constant evergreening to produce accurate results. Special emphasis programs, such as traditional CML Optimization methods, often require adding more CMLs but rarely evaluate whether any CMLs can be removed. Why do we often still feel like we are in reactive mode with these traditional proactive methods?

Overcoming Model Limitations with QRO

Quantitative Reliability Optimization (QRO) brings a new approach to reliability: it is the first model to incorporate both fixed and non-fixed assets into one analysis and can optimize tasks for an entire facility based on how they all operate together. QRO ensures that all reliability decisions have strong financial backing at a plant level. Because of this, QRO can forecast which assets drive changes, associated costs, and risks, both today and in the future. This forecast is powered by individual asset models, each shaped by its own data points, which roll up into a System Model.

A System Model is a reliability block diagram built of all the assets in your facility. It defines the relationships among your assets, such as whether they are in parallel or in series or whether a percentage throughput needs to pass through them. The assets can then be visualized much like a Process Flow Diagram (PFD) for a Reliability and Maintainability (RAM) analysis, giving you a calculated model that is continuously updated with how each asset’s uptime affects the system’s or plant’s uptime. From there, you can drill down on each asset and look at economic risk or health, safety, and environment (HSE) risk.
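The series/parallel availability math behind a reliability block diagram can be sketched as follows. The three-asset system and its availability figures are made-up examples, not values from any QRO model:

```python
from math import prod

def series(avails):
    """All assets must run: system availability is the product."""
    return prod(avails)

def parallel(avails):
    """Any one asset suffices: 1 minus the product of unavailabilities."""
    return 1 - prod(1 - a for a in avails)

# Redundant feed pump pair feeding a single reactor and a single column
pump_pair = parallel([0.95, 0.95])        # ~0.9975
system = series([pump_pair, 0.99, 0.98])  # ~0.9678
print(round(system, 4))
```

Note how the redundant pump pair is more available than either pump alone, while every series element drags the plant-level number down; this is the roll-up logic a System Model automates.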

Using the Data You Already Have

QRO is powered by the data you already have from your Inspection Data Management Software (IDMS), Computerized Maintenance Management System (CMMS), data historians, vibration analysis, and other sources through easy imports or software connectors. QRO is not meant to replace those tools; instead, it harnesses and leverages all that data together in one location and one model.

But what if you don’t think you have enough data to build a program? You don’t need decades of data to begin a QRO program. Use SME judgment as your baseline data point, then gather a new data point. Adding just one data point can extend the forecasted failure window by five years.

Image 1: Baseline exchanger tube bundle thinning LVC
Image 2: Same exchanger as Image 1 with additional data from an Eddy current inspection


QRO harnesses the power of data science to supplement facility data. The example below walks through an Asset Risk Analysis (ARA) for an individual fan. The fan is broken down into a few major components, the bearings and the impeller, but it could be broken down in more detail if desired. An FMEA was set up with selected failure modes, what is driving those failure modes, and, within those failure modes, how they will fail from a modeling standpoint.

Image 3: Example FMEA dashboard for piece of equipment
Image 4: Example of in depth look at a POF model

Weibull analysis priors are set up, with condition monitoring data from vibration readings feeding into this failure mode, so the model can constantly update for a live risk and level of uncertainty. Next, we need to understand the impact of that failure, which, in this case, is a one-day outage equaling about $3M in lost production. The question then becomes, “how do I know when this will break beyond just the Weibull parameters I have set up?” The model attaches vibration monitoring readings and plots the results onto a Lifetime Variability Curve (LVC). The LVC, based on Bayesian statistical models, can forecast the point at which we expect the earliest onset of damage, the most likely failure point, and the latest possible failure point.
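As a rough sketch of the Weibull side of this calculation (the Bayesian LVC updating itself is more involved), the two-parameter Weibull CDF gives the probability of failure by time t. The shape and scale values below are illustrative, not the fan’s actual fitted parameters:

```python
from math import exp

def weibull_pof(t_years: float, beta: float, eta_years: float) -> float:
    """Two-parameter Weibull CDF: probability of failure by time t."""
    return 1 - exp(-((t_years / eta_years) ** beta))

# Prior belief: wear-out behavior (beta > 1), 10-year characteristic life
print(round(weibull_pof(5, beta=2.0, eta_years=10.0), 3))  # ~0.221

# If new vibration evidence shortens the characteristic-life estimate,
# the same 5-year horizon carries a much higher POF
print(round(weibull_pof(5, beta=2.0, eta_years=6.0), 3))   # ~0.501
```

This is the mechanism by which a single new reading can materially reshape the forecast: updating the distribution’s parameters shifts the whole POF curve, not just one point.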

Image 5: Example LVC showing predicted failure after next planned shutdown

Each one of the bearings has its own criteria for failure and its own calculations for its forecast. One shows that there will be a failure prior to the next shutdown, so what do you do?

Image 6: Example LVC showing predicted failure prior to next planned shutdown

View the planned tasks and the work history pulled from the CMMS for this component, and you can start to plan new tasks to help mitigate this risk. On the results tab, you can see that the worst-actor CML (here, the fan’s onboard vertical vibration sensor) funnels into a risk profile that shoots from a 0% to a 100% chance of failure over only a couple of months, effectively guaranteeing a failure.

Image 7: Example POF curve if equipment is let run to fail
Image 8: Example of worst actor CML list
Image 9: Example of overall asset risk without any mitigation

This early failure means that we need to intervene. So, in this actual scenario with the client, that is what we did: we performed some maintenance immediately and planned more comprehensive repair tasks. Fast forward a bit, and the image below shows what those forecasts look like after repairing the bearing component, plus one new vibration reading after installation. Note how much the LVC failure range widens into the future, how the expected lifespan is extended, and the effect on the POF distribution curve.

Image 10: New LVC for asset after repairing the bearing component and an additional vibration reading.
Image 11: New POF curve after repairing the bearing component and an additional vibration reading.

This mitigation hasn’t solved all the problems with this asset; we still need to plan the major repair/replace tasks for the next shutdown. Note that the blue line for the bearings component restarts after the May 2022 repair, but the impeller risk remains high on the solid yellow line, with the two combining to form the overall green asset risk line. Both components’ Mitigated Risk curves (dashed lines) drop in May 2023, given the added plan for a full asset repair/replace task during next year’s shutdown.

Image 12: Example of risk reset after repairing bearing component
Image 13: Example of risk reset after repairing bearing component

To add economic justification that these tasks were worthwhile, you can build out what-if scenarios and constrain different parameters around HSE, economic risk thresholds, and availability impacts. This allows you to make an A-to-B comparison with your plans and quantify what it was all worth. For our fan scenario, overall plant availability increased by over 1.6%. You can also see that before the repair, the fan was the number-two worst actor, and after the repair, it dropped off the top-ten worst actor list.

Image 14: Example of overall scenario comparison, showcasing facility availability
*Note: The fan in this example has completely dropped off the bad actors list, but also other assets on the list have changed based on the effect of changes made.

Fill out the form below to watch a Pinnacle team member discuss how to move from Reactive to Proactive


Proactive programs boast numerous benefits, such as optimized operations, maintenance, and inspection planning; reduced downtime; and more efficient SME utilization. Industry has made improvements over the years, beginning to transition from time-based to risk-based approaches, but facilities are still all over the map in terms of program maturity. With the advances in reliability continuing with QRO, facilities now have a way to evolve no matter how much historical data they have or where they fall on the maturity scale.

Schedule a 30-minute discovery call to learn how QRO can help evolve your program.

The Power of Image Analytics in Mitigating Corrosion

Inspectioneering Journal, Nov/Dec 2022 Issue 

How Image Analytics Reduced Data Capture Costs by 90% and Inspection Time by 50% for Two Operators

If not properly identified and managed, corrosion can have a catastrophic impact on a facility. To mitigate the cost of asset failure and downtime that can result from corrosion, facilities must adopt a robust, proactive approach to corrosion assessment and mitigation. However, proactive corrosion management requires a sizable amount of data collection, processing, and interpretation, and many facilities struggle to allocate the budget and resources needed for this approach.

In this article, we discuss two use cases of how advanced visual data capture and image analytics can be used by multiple industries to drive better reliability decisions:

  1. Coating Optimization for the Upstream Industry: An image analytics study for an upstream operator projects that new 360° cameras reduce the cost and time to capture and process corrosion data by 90%, to optimize coating programs.
  2. Systematic Identification of External Corrosion in Midstream and Downstream Industries: An image analytics proof of concept (POC) showed that a machine learning model could be trained to detect external corrosion more systematically and reduce inspection time by 50%. Additionally, this model minimizes human subjectivity in inspections and creates consistency in both coating and inspection grading.

Hear from the Author

Interested in learning more? Check out the video below where Sid discusses how recent advancements in visual data capture and image analytics are proving to be effective tools in helping facilities upgrade their approach to corrosion management.

Research and Development Roundtable: CML Optimization

Pinnacle R&D Roundtable: The Premise

Pinnacle is actively working to advance the mechanical integrity industry alongside our customers, partners, and other industry leaders. We value collaboration and believe there is a lack of effective knowledge sharing, networking, and the free flow of ideas in the market. For that reason, Pinnacle is hosting exclusive roundtables, where leading-edge ideas related to better using data-driven approaches to mechanical integrity can be explored in confidence with like-minded MI leaders.


Roundtable attendees:

  • Are Mechanical Integrity leaders who have shown a desire to move the industry forward

  • Have financial responsibility for inspection or plant integrity/reliability for fixed equipment

  • Have an ability to evaluate historical data and prioritize inspection using data analytics

  • Desire to optimize piping and pressure vessel CML inspection

  • Desire to quantify the value of inspection and incident avoidance in financial terms

CML Optimization

The traditional approach to CML Optimization has been the elimination of CMLs. We believe the focus should first and foremost be on effective risk management (maintaining or reducing risk) and production impact. Condition Monitoring Optimization is a new, data-driven methodology for developing inspection scope, techniques, and intervals that are dynamically updated as information becomes available. Optimization enables facilities to confidently prioritize inspection of CMLs and identifies where additional data is required, where inspection adds little or no value, or when corrective maintenance is needed. A case study will be provided in which an optimized monitoring program was developed for an energy company, identifying CML inspections that reduce risk and increase availability. The program prioritizes CML inspection schedule and scope while extending intervals for over 50% of the total population. In addition, remaining risk and downtime contributors were mitigated by identifying other inspection and monitoring techniques to further reduce loss potential.

The program does the following:

  1. Identifies CMLs that fail specific risk criteria within the plan period.
  2. Identifies CMLs that potentially reduce production reported in terms of availability.
  3. Evaluates the circuit data statistically to characterize thinning type behavior (general versus local).
  4. Identifies CMLs with high POF/risk but low uncertainty.
    • Localized CML circuit inspection targets a specific statistical confidence desired through inspection.
    • Generalized CML circuit inspection targets the % coverage desired to achieve the desired confidence.
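One simple way to sketch the general-versus-local screen in step 3 is a coefficient-of-variation test on a circuit’s CML corrosion rates. The 10% cutoff and the readings below are assumptions for illustration, not the methodology’s actual criteria:

```python
from statistics import mean, stdev

def thinning_type(rates_mpy, cv_cutoff=0.10):
    """Classify a circuit's thinning as generalized (uniform corrosion
    rates across its CMLs) or localized (a few outlier hot spots)."""
    mu = mean(rates_mpy)
    cv = stdev(rates_mpy) / mu   # coefficient of variation
    return "generalized" if cv <= cv_cutoff else "localized"

print(thinning_type([4.8, 5.1, 5.0, 4.9, 5.2]))  # tight spread -> generalized
print(thinning_type([1.0, 1.2, 9.5, 1.1, 0.9]))  # one hot spot -> localized
```

The classification matters because, as noted above, generalized circuits are inspected for percent coverage while localized circuits are inspected to reach a target statistical confidence.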

In addition, we will discuss our analysis using minimum required thickness (tmin) as the failure criterion (critical thickness), as well as explore the options of using fitness-for-service (FFS) assessments, rerated minimum thickness, and reduced safety factors.

Request to Attend a Roundtable

Fill out the form below and a member of our team will contact you shortly.

Combating the Complexity of PRD RBI with a New API RP 581-Based Tool: A Case Study

Pressure Relief Devices (PRDs) play a critical role in preventing overpressure events that can result in loss of containment, equipment damage, and unplanned downtime for pressure vessels, boilers, and other pressurized systems. Knowing which PRDs present the greatest risk to your facility and how often they need to be inspected is critical to mitigating the risk of PRD failure.

Pressure Relief Device Risk-Based Inspection (PRD RBI) is used to assign risk to PRDs based on probability of failure (POF) and consequence of failure (COF) calculations and can help your facility adjust its inspection intervals and resource allocations based on this calculated level of risk. However, despite the multitude of benefits PRD RBI provides, this approach has not been easily adopted by the industry due to its complexity.

Many facilities use API RP 581 methodology as a starting basis for PRD RBI. However, existing API 581 RBI PRD calculations are inconsistent, challenging to interpret, and contain errors. To address these concerns, we developed a tool with model improvements to the calculations. Additionally, the proposed updates to these calculations were submitted and accepted by the API RP 581 subcommittee and will be included in the Fourth Edition release.

Learn more about the project that resulted in the development of the tool below:

Common Industry Challenge

A refinery historically inspected its PRDs on time-based intervals that generally occurred every 5 to 10 years. With all PRDs on a fixed-interval inspection program, the facility exposed itself to higher risks, costs, and missed opportunities by under-inspecting some valves and over-inspecting others. Additionally, the fixed-interval program led to unnecessary constraints in turnaround planning.

To combat these challenges, facility leadership decided to switch to an RBI approach for its PRDs. Implementing this type of approach would help the facility lower its costs and better understand how it could decrease the risk of failure. Additionally, a risk-based approach would help the facility extend its inspection and test schedules to better align with its turnaround schedule, as appropriate.

The refinery collaborated with our team to implement RBI intervals for its PRDs. The team leveraged API RP 581 RBI calculations for PRDs and found that the existing calculations contained multiple errors and the documentation was difficult to follow.

Additional issues included:

PRD Inspection Updating:

The current approach calculated incorrect risk results. For example, a B-level inspection for a PRD produced lower risk than an A-level inspection. Additionally, the approach showed lower risk for PRDs that did not get inspected than for those that had any level of inspection. The approach needed to be modified so that confidence in a PRD's condition would be debited and credited as appropriate, and so that risk was correctly calculated for both the inspection and no-inspection cases.

PRD Component Probability of Failure (POF):

The current API 581 equation for component POF applies only to PRDs with a design margin of four, which introduces error for assets with differing design margins.

Damage Adjusted Component POF:

The damage factor and POF are calculated at the normal operating pressure. The current probability of failure on demand (POFOD) calculation needed to be modified to allow for various design margins as well as to limit the POF to a maximum of 1. The equation for PRD-protected component POF assumes that the component's POF is constant over time and adjusts it for the overpressure scenarios.
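As a sketch only, and emphatically not the API RP 581 equation, the two modifications described here (handling design margins other than four and capping POF at 1) might look like the following. The `overpressure_ratio` input and the scaling rule are illustrative assumptions:

```python
def damage_adjusted_pof(pof_operating, overpressure_ratio, design_margin):
    """Illustrative sketch (not the actual API RP 581 formulation):
    scale the component POF calculated at normal operating pressure
    by the overpressure demand relative to the design margin, and
    cap the result at 1."""
    scaled = pof_operating * max(overpressure_ratio / design_margin, 1.0)
    return min(scaled, 1.0)

# Overpressure at twice the design margin doubles the POF, but the
# result can never exceed 1:
print(damage_adjusted_pof(0.9, 8.0, 4.0))  # capped at 1.0
```

The key behaviors this models, per the text above, are that the design margin is a parameter rather than a hard-coded four, and that POF is bounded at 1.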

Because the facility needed a technically sound, industry-accepted approach to drive its PRD inspection intervals, our team worked with the refinery’s leadership to develop a tool that would correctly execute API 581 PRD RBI calculations.

Watch Lynne Kaley, Director of Reliability Strategy, discuss how the team adjusted the API 581 PRD RBI calculations to rank the risk of this facility’s PRDs more accurately.

Developing a Simplified Tool to Address a Complex Problem

To provide the facility with an industry-accepted approach that effectively addressed the gaps in API RP 581’s PRD calculations, Pinnacle determined that a tool was required to better calculate the POF and COF of the facility’s PRDs while establishing credible inspection intervals. The tool took approximately four months to create and three months to validate with the facility.

The development of the tool required translating and programming the document logic in API RP 581 so that each intermediate step of the calculation could be analyzed and transparent to the facility. Making the results of every intermediate step visible was crucial in determining which steps were causing erroneous results. The tool now ensures that each step of the PRD RBI process makes sense and can be seen by facility personnel. Additionally, the tool:

  1. Modifies the level of inspection confidence factors and adds ineffective confidence factors.
  2. Adjusts component POF based on the assigned overpressure for each damage case. For the POFOD equations, the tool corrects dimensionless design margin factors and limits the component POF to a maximum of 1. The POF equation can be modified to accommodate different design margins instead of being accurate only for the most common design margin of four. The design margin factor extends the use of this equation to any construction code year and design margin.
  3. Combines two separate POFOD calculations, one with and one without inspection history, which reconciles the API RP 581 calculations.

After the risk and inspection planning results were tested, the recommended updates to the calculations were submitted to API. These changes were balloted, accepted, and will be reflected in the Fourth Edition release. To the best of our knowledge, this tool is the only RBI tool that leverages the corrected API 581 calculations for PRDs. Depending on the number of PRDs in a study and quality of facility data, a pilot can be completed in one to four months for most facilities.

How Did the New Tool Impact the Facility’s Risk Results?

By using the new tool to conduct PRD RBI, the refinery was able to review risk results and pinpoint the variables that were driving the risks for its PRDs. For example, the refinery was able to determine, document, and verify the low risk associated with the PRDs protecting its equipment in water services. Having the ability to pinpoint this risk enabled the refinery to identify issues and causal factors, and ultimately, determine the optimal timing and actions needed to mitigate risks. Additionally, having the ability to confidently identify the risk associated with specific PRDs will help the facility avoid unnecessary inspections and better optimize its budget.

The graphs below were generated by the tool and illustrate how the POFOD calculated for one of the refinery’s PRDs becomes more accurate as updates to the calculations are implemented.

Figure 1 shows the POFOD for different scenarios and illustrates how the original API 581 RBI PRD calculations incorrectly assign a “no inspection” scenario (illustrated by the navy-blue line) with a lower risk than scenarios where inspection credit should have been assigned.

Figure 1: Inspection POFOD vs. Time – Current Approach: The original POFOD results prior to updated API 581 calculations.

Figure 2 illustrates the impact that updating the calculations had on the PRD’s POFOD as the team developed the tool, specifically to the “ineffective” inspection scenario. In this figure, the “no inspection” scenario has been re-assigned a higher risk and labeled as “ineffective modified” (illustrated by the dark green line).

Figure 2: Inspection POFOD vs. Time – Modified Ineffective Inspection

Figure 3 shows the calculated risk after the tool was completed. The “ineffective modified” inspection scenario is correctly assigned the highest POFOD, followed by a “Fairly Effective, C Fail” scenario.

Figure 3: Inspection POFOD vs. Time – Modified B Inspection: The POFOD after the tool was completed, which accurately calculates the risk of POFOD over time for the refinery’s PRD.


With a new risk-based inspection (RBI) tool for PRDs, facilities now have the ability to calculate the POF and COF of PRDs with greater accuracy. PRDs that are not inspected are now penalized for the missing inspection and show a greater risk of failure than those with any other level of credible inspection. Additionally, the PRD calculations now apply to PRDs with a variety of design margins, and POFOD is capped at a maximum of 1, allowing this methodology to be applied across a variety of assets and facilities.

Learn more about how this tool can help you set more appropriate intervals and deploy your resources more effectively through PRD RBI.

Condition Monitoring Location (CML) Optimization in Newton™

How CML Optimization Prioritizes in Newton™

In heavy-process industries, facilities do all they can to minimize risk. They encounter risks associated with production loss and, more importantly, safety. For fixed equipment, we typically manage risk by performing routine inspections across multiple condition monitoring locations (CMLs) in order to monitor degradation rates, quantify damage states, and detect problems before they occur. Subject Matter Experts (SMEs) are used to select potential locations, but, by nature, can be very conservative due to industry standards and the high consequences of an error. This can result in a very high number of CMLs and significant cost to maintain these programs.

To solve this problem, we have developed a data-driven approach to CML optimization, which quantitatively optimizes inspection scope and intervals based on historical inspection data and subject matter expert input. Leveraging Newton™, our quantitative analytics platform, our machine learning models combine expected SME-assigned corrosion rates and observed CML thickness measurements to project future degradation and the associated uncertainty for each CML. This enables you to identify which CMLs account for the largest amounts of risk and potential production loss over time. The application identifies not only CMLs with low estimated remaining life but also gaps where more data is needed, minimizing unknowns and optimizing overall risk management.
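One simple way to combine an SME-assigned rate with observed CML data, in the spirit of the models described above, is a precision-weighted (conjugate normal) update. The actual Newton models are not public, so this is only a stand-in with made-up numbers:

```python
def posterior_rate(prior_mean, prior_std, measured_rates, meas_std):
    """Blend an SME-assigned corrosion rate (the prior) with observed
    CML rates, weighting each by its precision (1/variance). A
    simplified stand-in for the platform's models; all inputs are
    illustrative (e.g., rates in mils/year)."""
    n = len(measured_rates)
    prior_prec = 1.0 / prior_std**2
    data_prec = n / meas_std**2
    data_mean = sum(measured_rates) / n
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean + data_prec * data_mean) / post_prec
    post_std = post_prec ** -0.5
    return post_mean, post_std

# The posterior shifts from the SME prior toward the field data, and
# the uncertainty shrinks as readings accumulate:
mean_post, std_post = posterior_rate(5.0, 2.0, [8.0, 9.0, 10.0], 3.0)
```

This captures the two behaviors the text describes: projections update as data arrives, and each new reading reduces the remaining uncertainty.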

What is CML Optimization?

CML optimization is an effort that ensures effective CML coverage and placement by deciding the most appropriate locations in which to take readings while also reducing the strain on facility resources. CML optimization creates efficiencies by focusing on areas where corrosion or damage is likely to occur, resulting in recommendations to remove unnecessary CMLs that are not providing value and, at times, to add CMLs where more monitoring may be necessary.

What Makes Newton™ Different?

CML Optimization in Newton sets itself apart from other technology in the industry by providing:

  • No subjectivity
  • Quantitative value proposition
  • A consistent and scalable approach
  • The combined power of data with SME guidance
  • An entire methodology based on reducing uncertainty and how each reading contributes to future uncertainty
  • An evaluation of all CMLs to build a statistical model of the entire circuit
  • Projected performance based on availability, not just risk

To understand how to better optimize your CMLs in Newton™, schedule a discovery call.

Mitigating the Threat of Corrosion Through Data-Driven Models  

Corrosion presents a significant threat to the integrity of many facilities. The key to mitigating the threat of corrosion is accurately predicting the corrosion rates of your assets. Leveraging a model that combines the computational strength of data science and the input and validation of Subject Matter Experts (SMEs) can help your facility estimate corrosion rates more accurately.  

Corrosion rate estimation is typically performed by SMEs who use historical process and equipment data, along with industry-standard tools and standards, to produce results. Although this analysis is helpful, the process has limitations: each facility has a unique corrosion profile due to different environmental conditions, maintenance and operation practices, and other factors. In addition, SME-created corrosion models often lean on the conservative side.

However, recent technological advancements are now providing facilities with access to an unprecedented amount of data, an amount that is almost impossible for a human or a team to analyze adequately. These large volumes of data can now be analyzed quickly and efficiently through the power of machine computation. By combining the knowledge of an SME with the analytical power of a machine, facilities can optimize suggested inspection tasks to create efficiencies and reduce costs.

Developing the Model

The steps below demonstrate what a data-driven process can look like. We first begin with cleansing the data. After the data has been cleansed, it can then be fed into the machine. Once the machine has completed its analysis, SMEs can then review the results to verify they make sense. 

1. Cleanse the Data

2. Feed Data to the Machine

3. Validate with SME

1. Cleanse the Data

Before beginning any model, statistical tools and methods are used to cleanse the data. This is a basic but often overlooked step that, if skipped, can skew the results and provide an inaccurate model. Often, data is plagued with quality issues that can lead to bad decision-making. By presenting the machine with a cleaner data set, it will be able to make better, more accurate predictions. Data cleansing refers to the process of preparing data for analysis by removing or modifying data that is incorrect, incomplete, irrelevant, duplicated, or improperly formatted. If we don’t cleanse the data, it creates a “garbage in, garbage out” scenario.  
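A minimal sketch of what such a cleansing pass might look like. The rules for missing, non-physical, and duplicated thickness records are illustrative assumptions, not the specific statistical tools the text refers to:

```python
def cleanse(readings):
    """Drop missing, non-physical, and duplicated thickness records
    before model training. The 0-5 inch validity window and the
    (cml_id, date) duplicate key are illustrative rules."""
    seen = set()
    clean = []
    for r in readings:
        t = r.get("thickness")
        if t is None or not (0 < t < 5.0):  # blanks and impossible values
            continue
        key = (r["cml_id"], r["date"])
        if key in seen:                      # duplicated records
            continue
        seen.add(key)
        clean.append(r)
    return clean

raw = [
    {"cml_id": "C1", "date": "2021-01-01", "thickness": 0.31},
    {"cml_id": "C1", "date": "2021-01-01", "thickness": 0.31},  # duplicate
    {"cml_id": "C2", "date": "2021-01-01", "thickness": None},  # missing
    {"cml_id": "C3", "date": "2021-01-01", "thickness": 90.0},  # non-physical
    {"cml_id": "C4", "date": "2021-01-01", "thickness": 0.28},
]
print(len(cleanse(raw)))  # 2 valid records remain
```

Feeding the model only the two surviving records avoids the "garbage in, garbage out" scenario described above.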

2. Feed Data to the Machine

Once the data is prepared, it can be fed into the machine to train the machine to understand how corrosion rates manifest in the field. This is done through supervised machine learning, where the machine is fed data examples to learn patterns. For example, the machine is given operating and design data associated with circuits, such as temperature, pressure, stream constituents, metallurgy, and observed corrosion rate. We feed the machine more data examples like this, and as the machine is exposed to the data, it starts to learn relationships between how those pieces of data correlate with the corrosion rate observed in the field.  

After we’ve fed the machine all these examples, it will learn how and, in most cases, why the corrosion rate is presenting as it is. Then, we can use this to make predictions on areas or circuits the machine has never seen before. This could be a circuit with some attributes the machine has seen before but could have different variants, such as temperatures, metallurgy configurations, or other design or operating conditions—but because of what the machine has learned, it can make reasonable predictions for this new scenario.
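A toy example of the supervised idea described above: a nearest-neighbor regressor "learns" from (temperature, pressure) examples and predicts a corrosion rate for a circuit it has never seen. The data, the two-feature input, and the choice of model are illustrative only and much simpler than the actual system:

```python
def predict_rate(train, query, k=2):
    """Predict a corrosion rate for an unseen circuit by averaging the
    rates of its k nearest training examples in feature space. A toy
    stand-in for the supervised models described above."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)) ** 0.5, rate)
        for x, rate in train
    )
    return sum(rate for _, rate in dists[:k]) / k

# Hypothetical (temperature °F, pressure psi) -> observed rate examples:
train = [
    ((400, 150), 4.0), ((420, 160), 5.0),   # hot, high-pressure circuits
    ((250, 90), 1.0), ((240, 80), 1.2),     # cooler, low-pressure circuits
]
print(predict_rate(train, (410, 155)))  # 4.5, averaging the two hot circuits
```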

3. Validate with SME

Finally, once the machine has provided its estimates, they are sent back to the SME for review and validation. The SME can help identify the need for deviations and make sure that the results make sense in terms of both unit history and industry expectations. Any updates are then fed back into the machine to help it continue learning for future analyses. This approach marries subject matter expert knowledge with data science, providing a solution that is better than either could offer independently.


By combining the strengths of big data with subject matter expertise, we end up with the best of both worlds and with quality that exceeds what we’re able to do currently in the industry.

To learn more about how data science can be leveraged in predicting damage rates, watch the presentation below, where we discuss Pinnacle’s Reformer Study. This study compares the accuracy of asset degradation rates predicted by a machine learning model to the rates predicted by human SMEs applying current industry standards. Andrew Waters, Ph.D., and Fred Addington lead the discussion about how large data sets can be used to better predict asset degradation and the challenges of making “Big Data” work for facilities.

Using Data Science to Enhance Reliability: Four Real-World Applications

Inspectioneering Journal, Sep/Oct 2022 Issue 

The rise in computational power over the last decade has raised the question of whether, and to what extent, quantitative methods such as data science can improve reliability programs. While data science has the power to revolutionize the reliability industry, it will only be able to do so with strong guidance and review from subject matter experts (SMEs).

The ability to make better decisions by leveraging data continues to be a theme across the industry and will help decision-makers make more informed strategic decisions at a faster pace. This article highlights the efficacy of a combined SME and data science approach by showing four example applications:

  1. Using equipment data and associated corrosion rates across multiple reformer units to show how predictive models using data science compare to traditional industry templates and expertise-driven models.
  2. Leveraging Bayesian statistics to introduce uncertainty into remaining life calculations and probability of failure, empowering the expert to define variables better to identify and reduce uncertainty, improve equipment remaining life estimations, and reduce overall risk.
  3. Leveraging data science to quantify the confidence of damage detection, including driving benefit to cost for taking readings on or omitting particular condition monitoring locations (CMLs).
  4. Leveraging natural language processing on CMMS and IDMS data to identify anomalies for equipment that should have been flagged for positive material identification but were not.

Hear from the Authors

Interested in learning more? Check out the below video where Fred and Drew discuss how “Big Data” has the potential to improve how facilities can evolve their MI programs.

Case Study: Image Analytics Proof of Concept Results in External Visual Inspection Planning and Execution Efficiencies

Learn how Pinnacle leveraged image analytics to reduce costs and create staffing efficiency for several North American midstream and downstream operators. 


Several of Pinnacle’s customers were looking to perform external visual inspections faster and with fewer resources while not compromising quality.


Pinnacle teamed up with SoftServe to implement image analytics across several sites to complete a Proof of Concept (POC) for the new methodology to create efficiencies and improve inspection planning.


The POC proved that image analytics can reduce waste and inspection spend by reducing the overall variability and uncertainty in external corrosion assessment caused by human subjectivity in inspections.


Corrosion management and asset inspections play a vital role in the safety and reliability of a facility and are key factors in financial planning. According to the Asset Integrity Management Global Market Report 2022, Mechanical Integrity (MI) costs are estimated to exceed $25B for 2022. This emphasizes the need for cost-effective and robust corrosion assessment and mitigation approaches, which are vital to preventing Loss of Primary Containment (LOPC) failures and unplanned downtime.

Changing the inspection process from reactive to proactive corrosion management is one solution, but it increases the amount of data collected, processed, and interpreted. Adopting Artificial Intelligence (AI), robotics, and other emerging technologies is becoming increasingly essential to creating efficiencies in this process. 

The Challenge

While supporting inspection needs at a midstream facility in Texas, Pinnacle team members started to explore ideas to create efficiencies in collecting inspection data for the plant. A significant number of inspections were due or were coming due by the end of the year; the facility and Pinnacle had limited staff to complete the tasks in the given timeline. To combat these resource challenges, the team decided it was an opportune time to test a proof of concept for a new technology that could also create future efficiencies.   

Facilities are required to inspect assets regardless of the expected corrosion severity, whether dictated by a fixed-interval or Risk-Based Inspection (RBI) program. One theory for this POC was that many executed inspections identify only rust and light corrosion and do not result in significant actions such as repairing or replacing the asset. Current inspection processes are also subject to human bias and leave room for error.

Only 40% to 50% of personnel time is actually spent in the field completing inspection activities. The remaining time is often spent gathering documentation, creating the work package, identifying the location, and performing other administrative tasks. To reduce the overall inspection workload, Pinnacle chose to conduct an Image Analytics Proof of Concept (POC) using computer vision models. A computer vision model is a Machine Learning (ML) processing block that takes uploaded inputs, like images or videos, and predicts or returns pre-learned concepts or labels.

The goal of this POC was to prove the feasibility and business value of a computer vision-based approach for detecting and classifying three severity classes of corrosion on the panoramic images taken by handheld cameras during a facility inspection. In addition, the piloting, development, and scaling of the solution aimed to provide improvements in: 

  • Inspection automation 
  • Data-driven decision making 
  • Asset monitoring and forecasting 
  • Operations planning 

Pinnacle’s Solution

For this POC, Pinnacle partnered with SoftServe to build the computer vision algorithms needed to investigate how panoramic or 360-degree handheld images with corrosion labels can be used to train a ML model to automatically detect corrosion and calculate the percentage of corrosion coverage based on an object’s image. 
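Once a model produces a per-pixel corrosion mask, the percentage of corrosion coverage reduces to a pixel count. A minimal sketch, assuming a simple 0/1 grid rather than the pipeline's real mask format:

```python
def corrosion_coverage(mask):
    """Percentage of pixels a segmentation model labeled as corrosion.
    `mask` is a 2D grid of 0/1 model outputs; a simplified stand-in
    for the pipeline's actual mask representation."""
    total = sum(len(row) for row in mask)
    corroded = sum(sum(row) for row in mask)
    return 100.0 * corroded / total

# A tiny hypothetical mask for one object in the frame:
mask = [
    [0, 0, 1, 1],
    [0, 1, 1, 0],
    [0, 0, 0, 0],
]
print(corrosion_coverage(mask))  # ~33.3% of the imaged surface
```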

Pinnacle chose to pilot this approach simultaneously with two different customers. Each pilot was broken into five phases.  

Phase 1 Photo Collection

Pinnacle collected images and videos in the field using a Ricoh Theta Z1 360-degree camera. Equirectangular images with high resolution were gathered to conduct the analysis. Equirectangular images are single images stitched together from a 360-degree horizontal and 180-degree vertical view. 
Image 1: Example of an equirectangular image

Phase 2 Photo Preparation

Pinnacle used online tools for manual object labeling for inspections. Pinnacle then saved each photo with metadata, including circuit, photo ID, labeled objects, and asset tags for mapping.

Equirectangular images are complicated both to label and to use for ML model training. Labeling involves highlighting or identifying specific parts of an image to train the ML model. It is also worth having a model for non-panoramic images, as it will be more usable for other data sources.

Image 2: Example of Photo Preparation for Labeling

Phase 3 Corrosion Detection

As part of a labeling workshop, the Pinnacle and SoftServe teams developed a unified and consistent approach to analyzing images to evaluate anomalies/damages for further calculating statistics of coating damage and corrosion severity levels. SoftServe tested several deep neural network architectures to find out which one is more effective for corrosion segmentation. Other parameters and improvements such as different loss functions, input sizes, and various image preprocessing were also tested and evaluated. This allowed the teams to find the optimal quality-wise configuration of the pipeline.
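Segmentation quality in work like this is often scored with the Dice coefficient, one common basis for segmentation loss functions. The specific losses SoftServe evaluated are not stated, so this is a generic sketch over flattened 0/1 masks:

```python
def dice(pred, target, eps=1e-6):
    """Dice similarity between a predicted corrosion mask and its
    label (flattened 0/1 lists): 2*|overlap| / (|pred| + |target|).
    Generic metric, not the specific loss used in the POC."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)

# Predicting two corroded pixels when one was labeled scores 2/3:
print(round(dice([1, 1, 0, 0], [1, 0, 0, 0]), 3))
```

Comparing such scores across architectures and preprocessing variants is how a team can find the optimal quality-wise configuration of a pipeline.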


Image 3: Example of Corrosion Segmentation Results

Phases 4 & 5 Report and Results Preparation and Feedback



In the traditional way of conducting an inspection, an API 570 inspector uses a work package to inspect a particular pipe segment. However, as they go out to inspect, they may see multiple issues in the surrounding area. Because of the systems in place, that inspector will most likely not document those problems and will only write a report for the assigned piping segment. Typically, additional reporting only happens if something significant is observed, such as a leak.

With the solution described above, the cameras and computers capture everything in the frame, not just the pipe being inspected. An inspector can capture multiple assets simultaneously and perform a basic screening in one click. From there, an SME can evaluate it to pinpoint precisely what assets need additional attention.  Other important aspects to highlight are:

  • Screening can be conducted on an entire plant in just a matter of days, depending on plant size.
  • Personnel can capture the images in just a fraction of the time it would take to do planned inspections.
  • This methodology can be used to easily screen hard-to-reach areas, such as pipe racks and high-temperature assets that would normally require a shutdown or scaffolding to be built.

This POC created a heat map to identify areas of interest. It pulls up the associated image along with its GPS coordinates and gives a percentage estimate of the kind of damage it sees. Computer imaging software and image analytics help eliminate human subjectivity. For example, an inspector may go into the field and recommend aligning a finding to a damage mechanism, yet when only missing paint and light surface oxidation are observed during an SME review, the SME’s recommendation may be to do nothing.
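The heat-map step can be pictured as filtering per-image screening results down to flagged points. The field names and the 10% area-of-interest threshold below are assumptions for illustration, not the POC tool's actual schema:

```python
def heatmap_entries(results, threshold=10.0):
    """Turn per-image screening results into heat-map points of
    (latitude, longitude, corrosion %). Field names and the 10%
    threshold are hypothetical, chosen only for this sketch."""
    return [(r["lat"], r["lon"], r["corrosion_pct"])
            for r in results
            if r["corrosion_pct"] > threshold]

results = [
    {"lat": 29.7, "lon": -95.3, "corrosion_pct": 42.0},  # area of interest
    {"lat": 29.8, "lon": -95.4, "corrosion_pct": 3.0},   # screened out
]
print(heatmap_entries(results))  # only the 42% point survives the threshold
```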

Throughout the POC, SMEs validated the results and, in many cases, agreed with the corrosion predictions from the computer vision models. Over 75% of the time, SMEs said that the results from the ML algorithms met their expectations for external corrosion detection. Almost 50% of the time, the SMEs did not agree with each other; for example, one inspector might recommend coating replacement for low instances of corrosion and coating breakdown while another SME disagrees with that recommendation. This leads us to believe that such a solution can help standardize corrosion detection and drive the right actions to optimize cost and resources. In this case, accuracy is defined as the computer identifying what a human would also identify as corrosion. At the end of these POC projects, each customer was provided with the following deliverables:

  • Corrosion Segmentation Model: detects corrosion and rust stains on images of the facilities taken by a panoramic camera.
  • Texture Classification ML Model: detects paint defects, rust staining, and light and severe corrosion on close-up images of stationary equipment.
  • User Interface (UI): a browser application hosted on a secured Virtual Machine (VM) that allows users to tune processing parameters, run ML models in real time on selected images, and see the big picture on the map. The UI enables the user to process images one by one with a preview or to batch process an entire directory passed as a parameter.
  • Technical Report: outlines the chosen approaches and techniques for building the pipeline and incorporates the results of the main experiments, along with conclusions and potential improvements for future development.


With the completion of this POC, Pinnacle has proved that image analytics can reduce waste and overall costs by reducing the variability and uncertainty in external corrosion assessment caused by human subjectivity in inspections. Another benefit of this methodology is that it can be used at any level of the plant, including a full plant-wide assessment. The next phase of this POC is to leverage a similar solution for thermal Corrosion Under Insulation (CUI) anomaly screening, structural anomalies, and other advanced screening methods.

The overall goal of this methodology is to aid SMEs and plant managers in optimizing their field resources to get repeatable, reliable data without subjectivity. Eventually, Pinnacle aims to complete an overall external inspection governed by API 510/570 for an entire unit, such as a crude unit, within a two-to-three-week period.

Contact us to speak to one of our experts about how Pinnacle can help evolve your inspection program.